{"prompt": "Today, I’m speaking with Dr. Paul Christiano. Paul recently completed a PhD in theoretical computer science at UC Berkeley and is now a researcher at Open AI, working on aligning artificial intelligence with human values. He blogs at ai-alignment.com. Thanks for coming on the podcast, Paul.", "completion": "Thanks for having me."} {"prompt": "We plan to talk about Paul’s views on how the transition to an AI economy will actually occur and how listeners can contribute to making that transition go better, but first, I’d like to give you a chance to frame the issue of AI alignment in your own words. What is the problem of AI safety and why did you decide to work on it yourself?", "completion": "AI alignment, I see as the problem of building AI systems that are trying to do the thing that we want them to do. So in some sense, that might sound like it should be very easy because we build an AI system, we get to choose all … we get to write the code, we get to choose how the AI system is trained. There are some reasons that it seems kind of hard to train an AI system to do exactly … So we have something we want in the world, for example we want to build an AI, we want it to help us govern better, we want it to help us enforce the law, we want it to help us run a company. We have something we want that AI to do, but technical reasons, it’s not trivial to build the AI that’s actually trying to do the thing we want it to do. That’s the alignment problem.I care about that problem a lot because I think we’re moving towards a world where most of the decisions are made by intelligent machines and so if those machines aren’t trying to do those things humans want them to do, then the world is going to go off in a bad reaction. If the AI systems we can build are really good at … it’s easy to train them to maximize profits, or to get users to visit the websites, or to get users to press the button saying that the AI did well, then you have a world that’s increasingly optimized for things like making profits or getting users to click on buttons, or getting users to spend time of websites without being increasingly optimized for having good policies, heading in a trajectory that we’re happy with, helping us figure out what we want and how to get it.So that’s the alignment problem. The safety problem is somewhat more broadly, understand things that might go poorly with AI and what technical work and political work we can do to improve the probability that things go well."} {"prompt": "Right, so what concretely do you do at Open AI?", "completion": "So I do machine learning research, which is a combination of writing code and running experiments and thinking about how machine learning systems should work, trying to understand what are the important problems, how could we fix them, plan out what experiments give us interesting information, what capabilities do we need if we want to build aligned AI five years, 10 years, 20 years down the road? What are the capabilities we need, what should we do today to work towards those capabilities? What are the hardest parts? So trying to understand what we need to do and then actually trying to do it."} {"prompt": "Makes sense. So the first big topic that I wanted to get into was kind of the strategic landscape of artificial intelligence, safety research, both technical and, I guess, political and strategic. Partly I wanted to do that first because I understand it better than the technical stuff, so I didn’t want to be floundering right off the bat. 
What basically caused you to form your current views about AI alignment and to regard it as a really important problem? Maybe also, how have your views on this changed over time?", "completion": "So there are a lot of parts on my views on this, it’s like a complicated pipeline from do the most good for the most people, to write this particular machine learning code. I think very broadly speaking, I come in with this utilitarian perspective, I do care about more people more, then you start thinking, you take that perspective and you think that future populations will be very large, you start asking, what are the features of the world today that affect the long run trajectory of civilization? I think if you come in with that question, there’s two very natural categories of things, there’s if we all die then we’re all dead forever, and second, there’s sort of a distribution of values, or optimization in the world, and that can be sticky in the sense that if you create entities that are optimizing something, those entities can entrench themselves and be hard to move. In the same way that humans are kind of hard to remove at this point. You try and kill humans, humans bounce back.There are a few ways you can change the distribution of values in the world. I think the most natural, or the most likely one, is as we build AI systems, we’re going to sort of pass the torch from humans, who want one set of things, to AI systems, that potentially want a different set of things. So in addition to going extinct, I think bungling that transition is the easiest way to head in a bad direction, or to permanently alter the structure of civilization.So at a very high level, that’s kind of how I got to thinking about AI many years ago, and then once you have that perspective, one then has to look at the actual character of AI and say how likely is this failure mode? That is what actually determines what AI is trying to optimize, and start thinking in detail about the kinds of techniques people are using to produce AI. I think that after doing that, I became pretty convinced that there are significant problems. So there’s some actual difficulty there of building an AI that’s trying to do the thing that the human who built it wants it to do. If we could resolve that technical problem, that’d be great. Then we dodge this difficulty of humans maybe passing off control to some systems that don’t want the same things we want.Then, zooming in a little bit more, if the whole world … Right, so this is a problem which some people care about, we also care about a lot of other things though, and we’re also all competing with one another which introduces a lot of pressure for us to build whatever kind of AI works best. So there’s some sort of fundamental tension between building AI that works best for the tasks that we want our AI to achieve, and building AI which robustly shares our values, or is trying to do the same things that we want it to do.So it seems like the current situation is we don’t know how to build AI that is maximally effective but still robustly beneficial. If we don’t understand that, then people deploying AI will face some trade-off between those two goals. I think by default, competitive pressures would cause people to push far towards the AI that’s really effective at doing what we want it … Like, really effective at acquiring influence or navigating conflict, or so on, but not necessarily robustly beneficial. So then we would need to either somehow coordinate to overcome that pressure. 
So we’d have to all agree we’re going to build AI that actually does what we want it to do, rather than building AI which is effective in conflict, say. Or, we need to make technical progress so there’s not that trade-off."} {"prompt": "So to what extent do you view the arms race dynamic, the fact that people might try to develop AI prematurely because they’re in a competitive situation, as the key problem that’s driving the lack of safety?", "completion": "So I think the competitive pressure to develop AI, in some sense, is the only reason there’s a problem. I think describing it as an arms race feels somewhat narrow, potentially. That is, the problem’s not restricted to conflicted among states, say. It’s not restricted even to conflict, per se. If we have really secure property, so if everyone owns some stuff and the stuff they owned was just theirs, then it would be very easy to ignore … if individuals could just opt out of AI risk being a thing because they’d just say, “Great, I have some land and some resources and space, I’m just going to chill. I’m going to take things really slow and careful and understand.” Given that’s not the case, then in addition to violent conflict, there’s … just faster technological progress tends to give you a larger share of the stuff.Most resources are just sitting around unclaimed, so if you go faster you get more of them, where if there’s two countries and one of them is 10 years ahead in technology, that country will, everyone expects, expand first to space and over the very long run, claim more resources in space. In addition to violent conflict, de facto, they’ll claim more resources on earth, et cetera.I think the problem comes from the fact that you can’t take it slow because other people aren’t taking it slow. That is, we’re all forced to develop technology fast as we could. I don’t think of it as restricted to arms races or conflict among states, I think there would probably still be some problem, just because people … Even if people weren’t forced to go quickly, I think everyone wants to go quickly in the current world. That is, most people care a lot about having nicer things next year and so even if there were no competitive dynamic, I think that many people would be deploying AI the first time it was practical, to become much richer, or advance technology more rapidly. So I think we would still have some problem. Maybe it would be a third as large or something like that."} {"prompt": "How much attention are people paying to these kind of problems now? My perception is that the amount of interest has ramped up a huge amount, but of course, I guess the amount the number of resources going into just increasing the capabilities of AI has also been increasing a lot, so it’s unclear whether safety has become a larger fraction of the whole.", "completion": "So I think in terms of profile of the issue, how much discussion there is of the problem, safety has scaled up faster than AI, broadly. So it’s a larger fraction of discussion now. I think that more discussion of the issue doesn’t necessarily translate to anything super productive. It definitely translates to people in machine learning maybe being a little bit annoyed about it. So it’s a lot of discussion, discussion’s scaled up a lot. The number of people doing research has also scaled up significantly, but I think that’s maybe more in line with the rate of progress in the field. 
I’m not sure about the fraction of people working on it full time … Actually, no, I think that’s also scaled up, maybe by a factor of two in relative terms, or something. So if one were to look at publications at top machine learning conferences, there’s an increasing number, maybe a few in the last NIPS, that are very specifically directed at the problem, “We want our AI to be doing the thing that we want it to be doing and we don’t have a way to do that right now. Let’s try and push technology in that direction. To build AI that understands what we want and helps us get it.” So now we’re at the point where there’s a few papers in each conference that are very explicitly targeted at that goal, up from zero or one. At the same time, there are aspects of the alignment problem that are more clear, so things like building AI that’s able to reason about what humans want, and there are aspects that are maybe a little bit less clear, more arcane seeming. So for example, thinking about issues distinctive to AI which exceeds human capabilities in some respect. I think the more arcane issues are also starting to go from basically nothing to discussed a little bit."} {"prompt": "What kind of arcane issues are you thinking of?", "completion": "So there’s some problem with building weak AIs, say, that want to do what humans want them to do. There’s then a bunch of additional difficulties that appear when you imagine the AI that you’re training is a lot smarter than you are in some respect. So then you need some other strategy. So in that regime, it becomes … When you have a weak AI, it’s very easy to say what the goal is, what you want the AI to do. You want it to do something that looks good to you. If you have a very strong AI, then you actually have a philosophical difficulty of what is the right behavior for such a system. It means that there can be no very straightforward technical answer where we prove a theorem and say this is the right behavior … you can’t merely prove a theorem. You have to do some work to say we’re happy with what this AI is doing, even though no human understands, say, what this AI’s doing. So that’s the specification stuff. Another big part of alignment is training models that continue to do what you want … you train your model to do something, and on the training distribution it does what you want. There’s a further problem that maybe when you deploy it, or on the test distribution, it does something catastrophically different from what you want, and that’s also … on that problem, I think interest has probably scaled up even more rapidly. So the number of people thinking about adversarial machine learning, can an adversary find some situation in which your AI does something very bad, the number of people working on that problem has scaled up. I think it’s more than doubled as a fraction of the field, although it’s still, in absolute terms, kind of small."} {"prompt": "What do you think would cause people to seriously scale up their work on this topic and do you think it’s likely to come in time to solve the problem, if you’re right that there are serious risks here?", "completion": "Yeah, so I think that where we’re currently at, it seems clear that there is a real problem. There is this technical difficulty of building AI that does what we want it to do. It’s not yet clear if that problem is super hard, so I think we’re really uncertain about that. 
I’m working on it, not because I’m confident it’s super hard, but because it seems pretty plausible that it’s hard. I think that the machine learning community would be much, much more motivated to work on the problem if it became clear that this was going to be a serious problem. People aren’t super good at coping with, “Well, there’s a 30% chance this is going to be a huge problem,” or something like that. I think one big thing is, as it becomes more clear, then I think many more people will work on the problem. So when I talk about these issues of training weaker AI systems to do what humans want them to do, I think it is becoming more clear that that’s a big problem. So for example, we’re getting to the point where robotics is getting good enough that it’s going to be limited by, or starting to be limited by, how we communicate to the robot what it actually ought to be doing. Or people are becoming very familiar with … YouTube has an algorithm that decides what video it will show you. People have some intuitive understanding, they’re like, “That algorithm has a goal, and if that goal is not the goal that we collectively and the users of YouTube would want, that’s going to push the world in this annoying direction.” It’s going to push the world towards people spending a bunch of time on YouTube rather than their lives being better. So I think we are currently at the stage where some aspects of these problems are becoming more obvious, and that makes it a lot easier for people to work on those aspects. As we get closer to AI, assuming that these problems are serious, it’s going to become more and more obvious that the problems are serious. That is, we’ll be building AI systems where humans don’t understand what they do, and the fact that their values are not quite right is causing serious problems. I think that’s one axis and then the other axis is … So, I’m particularly interested in the possibility of transformative AI that has a very large effect on the world. So, AI that starts replacing humans in the great majority of economically useful work. I think that right now, we’re very uncertain about what the timelines are for that. I think there’s a reasonable chance within 20 years, say, but certainly there’s not compelling evidence that it’s going to be within 20 years. I think as that becomes more obvious, then many more people will start thinking about catastrophic risks in particular, because those will become more plausible."} {"prompt": "So your concerns about how transformative AI could go badly have become pretty mainstream but not everyone is convinced. How compelling do you think the arguments are that people should be worried about this and is there anything that you think that you’d like to say to try to persuade skeptics who might be listening?", "completion": "I think almost everyone is convinced that there is … or almost everyone in machine learning is convinced that there’s a problem. That there’s an alignment problem. There’s the problem of trying to build AI to do what you want it to do, and that that requires some amount of work. I think the point of disagreement … there are a few points of disagreement within the machine learning community. So one is, is that problem hard enough that it’s a problem that’s worth trying to focus on and trying to push differentially? Or is that the kind of problem that should get solved in the normal business of doing AI research? So that’s one point of disagreement. 
I think on that point, in order to be really excited about working on that problem, you have to be thinking, what can we do to make AI go better? If you’re just asking how can we have really powerful AI that does good things as soon as possible, then I think it’s actually not that compelling an argument to work on alignment. But I think if you’re asking the question how do we actually maximize the probability this goes well, then it doesn’t really matter whether that ought to be part of the job of AI researchers, we should be really excited about putting more resources into that to make it go faster, and I think if someone really takes seriously the goal of trying to make AI go well, instead of just trying to push on AI and trying to make cool stuff happen sooner, or trying to realize benefits over the next five years, then I think that case is pretty strong right now. Another place there’s a lot of disagreement in the ML community is maybe more an issue of framing than an issue of substance, which is the kind of thing I find pretty annoying. There’s one frame where you’re like, “AI’s very likely to kill everyone, there’s going to be some robot uprising. It’s going to be a huge mess, this should be at the top of our list of problems.” And there’s another framing where it’s like, “Well, if we, as the AI community, fail to do our jobs, then yes something bad would happen.” But it’s kind of offensive for you to say that we as the AI community are going to fail to do our jobs. I don’t know if I would really need to … it doesn’t seem like you should really have to convince anyone on the second issue. You should be able to be like, “Yes, it’d be really bad if we failed to do our jobs.” Now, this discussion we’re currently having is not part of us trying to argue that everyone should be freaking out, this is us trying to argue like … this is us doing our jobs. This discussion we’re having right now. You can’t have a discussion about us trying to do our jobs and be like, “Yes, it’s going to be fine because we’re going to do our jobs.” That is an appropriate response in some kinds of discussion, maybe …"} {"prompt": "But when you’re having the conversation about are we going to spend some money on this now, then …", "completion": "Yeah, then I think it’s not such a great response. I think safety’s a really unfortunate word. Lots of people don’t like safety, but it’s kind of hard to move away from. If you describe the problem to people, like training AI to do what we want it to do, they’re like, “Why do you call that safety?” That’s just the problem of building good AI, and that’s fine, I’m happy with that. I’m happy saying, “Yep, this is just doing AI reasonably well.” But then, yeah, it’s not really an argument about why one shouldn’t push more money into that area, or shouldn’t push more effort into that area. It’s a part of AI that’s particularly important to whether AI has a positive or negative effect. Yeah, I think in my experience, those are the two biggest disagreements. The biggest substantive disagreement is on the, “Is this a thing that’s going to get done easily anyway?” I think there people tend to have … maybe it’s just a normal level of over-confidence about how easy problems will end up being, together with not having a real … I think there aren’t that many people who are really prioritizing the question, “How do you make AI go well?” Instead, they just choose some cool thing they want to happen and ask, 
“How do I make that cool thing happen as soon as possible in calendar time?” I think that’s unfortunate, and it’s a hard thing to convince people on, in part because values discussions are always a little bit hard."} {"prompt": "So what do you think are the best arguments against being concerned about this issue, or at least, wanting to prioritize directing resources towards it, and why don’t they persuade you?", "completion": "So I think there are a few classes of arguments. Probably the ones I find most compelling are opportunity cost arguments, where someone says, “Here’s a concrete alternative. Yeah, you’re concerned about x, have you considered that y’s even more concerning?” I can imagine someone saying, “Look, the risk of bioterrorism killing everyone is high enough that you should … on the margin, returns to that are higher than returns to AI safety.” At least, I’m not compelled by those arguments. Part of that is a competitive advantage thing where, like, “I don’t really have to evaluate those arguments because it’s clear what my competitive advantage is.” In part, I have a different reason I’m not compelled by every argument of that form. So that’s one class of arguments against. In terms of the actual value of working on AI safety, I think the biggest concern is this, “Is this an easy problem that will get solved anyway?” Maybe the second biggest concern is, “Is this a problem that’s so difficult that one shouldn’t bother working on it, or one should be assuming that we need some other approach?” You could imagine the technical problem is hard enough that almost all the bang is going to come from policy solutions rather than from technical solutions. And you could imagine, those two concerns maybe sound contradictory, but aren’t necessarily contradictory, because you could say, “We have some uncertainty about this parameter of how hard this problem is.” Either it’s going to be easy enough that it’s solved anyway, or it’s going to be hard enough that working on it now isn’t going to help that much, and so what mostly matters is getting our policy response in order. I don’t find that compelling, in part because, one, I think there’s significant probability on the range in between those, and two, I just think working on this problem earlier will tell us what’s going on. If we’re in the world where you need a really drastic policy response to cope with this problem, then you want to know that as soon as possible. It’s not a good move to be like, “We’re not going to work on this problem because if it’s serious, we’re going to have a dramatic policy response.” Because you want to work on it earlier, discover that it seems really hard, and then have significantly more motivation for trying the kind of coordination you’d need to get around it."} {"prompt": "It seems to me like it’s just too soon to say whether it’s very easy, moderately difficult or very difficult, does that seem right?", "completion": "That’s definitely my take. So I think people make some arguments in both directions and we could talk about particular arguments people make. Overall, I find them all just pretty unconvincing. I think a lot of the, “It seems easy,” comes from just the intuitive, “Look, we get to build the AI, we get to choose the training process. We get to look at all the computation the AI is doing as it thinks. 
How hard can it be to get the AI to be trying to do …” or maybe not, maybe it’s hard to get it to do exactly what you want, but how hard can it be to get it to not try and kill everyone? That sounds like a pretty … there’s a pretty big gap between the behavior we want and the behavior of reasoning about what output is going to most lead to humans being crushed. That’s a pretty big gap. It feels like you ought to be able to distinguish those, but I think that’s not … There’s something to that kind of intuition. It is relevant when reasoning about how hard a problem is, but it doesn’t carry that much weight on its own. You really have to get into the actual details of how we’re producing AI systems, how is that likely to work, what is the distribution of possible outcomes, in order to actually say anything with confidence. I think once you do that, the picture doesn’t look quite as rosy."} {"prompt": "You mentioned that one of the most potentially compelling counterarguments was that there’s just other really important things for people to be doing that might be even more pressing. Yeah, what things other than AI safety do you think are among the most important things for people to be working on?", "completion": "So I guess I have two kinds of answers to this question. One kind of answer is what’s the standard list of things people would give? Which I think are the most likely things to be good alternatives. So for example, amongst the utilitarian crowd, I think talking about existential risk from engineered pandemics is a very salient option, and there’s a somewhat broader bioterror category. Of other things in this genre, one could also look at the world more broadly, so intervening on the political process, improving political institutions, or just pushing governance in a particular direction that we think is conducive to a good world, or a world on a good longterm trajectory. Those are examples of problems that lots of people would advocate for, and therefore, I think if lots of people think x is important, that’s good evidence that x is important. The second kind of answer is the problems that I find most tempting to work on, which is going to be related to … it’s going to tend systematically to be things that other people don’t care about where I also think there’s a lot of value. Yeah, one can add a lot of value if there’s a thing that’s important, if you care about the ratio of how important it actually is to how important other people think it is. So at that level, things that I’m like … I’m particularly excited about very weird utilitarian arguments. So I’m particularly excited about people doing more thinking about what actual features of the world affect whether we’re on a positive or negative trajectory. So thinking about things … There are a lot of considerations that are extremely important from the long run utilitarian perspective that are just not very important according to people’s normal view of the world, or normal values. So one big area is just thinking about, and acting on, that space of considerations. So an example, which is a kind of weird example, but hopefully illustrates the point, is normal people care a ton about whether humanity … they care a ton about catastrophic risks. They would really care if everyone died. 
I think to a weird utilitarian, you’re like, “Well, it’d be bad if everyone died, but even in that scenario, there was a bunch of weird stuff you would do to try and improve the probability that things turn out okay in the end.” So these include things like working on extremely robust bunkers that are capable of repopulating the world, or trying to … in the extreme case where all humans die, you’re like, “Well we’d like some other animal later to come along and if all intelligent life began and colonize the stars.” Those are weird scenarios, the scenarios that basically no one tries to push on … No one is asking, “What could we do as a civilization to make it better for the people who will come after us if we manage to blow ourselves up?”So because no one is working on them, even though they’re not that important in absolute terms, I think it’s reasonably likely that they’re good things to work on. Those are examples of kind of weird things. There’s a bunch of not as weird things that also seem pretty exciting to me. Especially things about improving how well people are able to think, or improving how well institutions function, which I’d be happy to get into more detail on, but are not things I’m expert in."} {"prompt": "Yeah, maybe just want to list off a couple of those?", "completion": "So just all the areas that seem … are high level areas that seem good to me, so a list of … Thinking about the utilitarian picture and what’s important to our future focused utilitarian, there’s thinking about extinction risks. Maybe extinction risks that are especially interesting to people who care about extinction. So things like bunkers, things like repopulation of the future, things like understanding the tails of normal risks. So understanding the tails of climate change, understanding the tails of nuclear war.More normal interventions like pushing on peace, but especially with an eye to avoiding the most extreme forms of war, or mitigating the severity of all out war. Pushing on institutional equality, so experimenting with institutions like prediction markets, different ways of aggregating information, or making decisions across people. Just running tons of experiments and understanding what factors influence individual cognitive performance, or individual performance within organizations, or for decision making.An example of a thing that I’m kind of shocked by is how little study there is of nootropics and cognitive enhancement broadly. I think that’s a kind of thing that’s relatively cheap and seems such good bang for your buck and expectation, that it’s pretty damning for civilization that we haven’t invested in it. Yeah, those are a few examples."} {"prompt": "Okay, great. Coming back to AI, how important is it to make sure that the best AI safety team ends up existing within the organization, that has the best general machine learning firepower behind it?", "completion": "So you could imagine splitting up the functions of people who work on AI safety into two categories. One category is developing technical understanding, which is sufficient to build aligned AI. So this is doing research saying, “Here are some algorithms, here’s some analysis that seems important.” Then a second function is actually affecting the way that an AI project is carried out, to make sure it reflects our understanding of how to build an aligned AI. So for the first function, it’s not super important. 
For the first function, if you want to be doing research on alignment, you want to have access to machine learning expertise, so you need to be somewhere that’s doing reasonably good machine learning research, but it’s not that important that you be at the place that’s actually at the literal cutting edge. From the perspective of the second function, it’s quite important. So if you imagine someone actually building very, very powerful AI systems, I think the only way in practice that society’s expertise about how to build aligned AI is going to affect the way that we build AGI, is by having a bunch of people who have made it their career to understand those considerations and work on those considerations, who are involved in the process of creating AGI. So for that second function it’s quite important that if you want an AI to be safe, you want people involved in development of that AI to basically be alignment researchers."} {"prompt": "Do you think we’re heading towards a world where we have the right distribution of people?", "completion": "Yeah so I think things are currently okay on that front. I think as we get closer … so we’re currently in a mode where we can imagine … we’re somewhat confident there won’t be powerful AI systems within two or three years, and so for the short term, there’s not as much pressure as there will be closer to the day to consolidate behind projects that are posing a catastrophic risk. I would be optimistic that if we were in that situation where we actually faced a significant prospect of existential risk from AI over the next two years, then there would be significantly more pressure for … both pressure for safety researchers to really follow wherever that AI was being built, or be allocated across the organizations that are working on AI that poses an existential risk, and also a lot of pressure within such organizations to be actively seeking safety researchers. My hope would be that you don’t have to really pick. Like the safety researchers don’t have to pick a long time in advance what organizations they think will be doing that development, you can say, “We’re going to try and develop the understanding that is needed to make this AI safe. We’re going to work in an organization that is amongst those that might be doing development of dangerous AI, and then we’re going to try and live in the kind of world where, as we get very close, there’s a lot of … people understand the need for and are motivated to concentrate more expertise on alignment and safety,” and that that occurs at that time."} {"prompt": "It seems like there’s some risks to creating new organizations because you get a splintering of the effort and also potential coordination problems between the different groups. How do you feel we should split additional resources between just expanding existing research organizations versus creating new projects?", "completion": "So I agree that to the extent that we have a coordination problem amongst developers of AI, to the extent that it’s hard for the field to reach agreements or to be regulated as there are more and more actors, then, all else equal, I’d prefer not to have a bunch of new actors. 
I think that’s mostly the case for people doing AI development, so for example, for projects that are doing alignment per se, I don’t think it’s a huge deal and it should mostly be determined by other considerations, whether to contribute to existing efforts or create new efforts. I think in the context of AI projects, all else equal, one should only be creating new AI … if you’re interested in alignment, you should only be creating new AI projects where you have some very significant interest in doing so. It’s not a huge deal, but it’s nicer to have a smaller number of more pro-social actors than to have a larger number of actors with uncertain … or even a similar distribution of motivations."} {"prompt": "So how much of the variance in outcomes from artificial general intelligence, in your estimates, comes from uncertainty about how good we’ll be at actually working on the technical AI alignment problem, versus uncertainty about how the firms that are working to develop AGI, and potentially the governments in the countries where they’re operating, are going to behave?", "completion": "Yeah, I think the largest source of variance isn’t either of those but is instead just how hard the problem is. What is the character of the problem? So after that, I think the biggest uncertainty, though not necessarily the highest place to push, is about how people behave. It’s how much investment do they make? How well are they able to reach agreements? How motivated are they in general to change what they’re doing in order to make things go well? So I think that’s a larger source of variance than the technical research that we do in advance. I think it’s potentially a harder thing to push on in advance. Pushing on how much technical research we do in advance is very easy. If we want to increase that amount by 10%, that’s incredibly cheap, whereas having a similarly big change on how people behave would be a kind of epic project. But I think that more of the variance comes from how people behave. I’m very, very uncertain about the institutional context in which that will be developed. Very uncertain about how much each particular actor really cares about these issues, or when push came to shove, how far out of their way they would go to avoid catastrophic risk. I’m very uncertain about how feasible it will be to make agreements to avoid a race to the bottom on safety."} {"prompt": "Another question that came in from a listener was, I guess, a bit of a hypothetical, but it’s interesting to prod your intuitions here. What do you think would happen if several different firms or countries simultaneously made a very powerful general AI? Some of which were aligned but some of which weren’t and potentially went rogue with their own agenda. Do you think that would be a very bad situation in expectation?", "completion": "My normal model does not involve a single moment where you’re building powerful AI. So that is, instead of having a transition from nothing to very powerful AI, you have a bunch of actors gradually ramping up the capacity of the systems they’re able to build. But even if that’s false, I expect developers to generally be really well financed groups that are quite large. And if there are smaller groups, I do generally expect them to divide up the task and effectively pool resources in one way or another. Either by explicit resource sharing or by merging or by normal trading with each other. 
But we can still imagine … I say, in general, this was distributed across the world, it would be a bunch of powerful AI systems, some of which are aligned, some of which aren’t aligned. I think my default guess about what happens in that world is similar to saying if 10% of the AIs are aligned, then we capture 10% as much value as if 100% of them are aligned. It’s roughly in that ballpark."} {"prompt": "Does that come from the fact that there’s a 10% chance that one out of 10 AGIs would, in general, take over? You have more of a view where there’s going to be a power sharing, or each group gets a fraction of the influence, as in the world today?", "completion": "Yeah. I don’t have a super strong view on this, and in part, I don’t have a strong view because I end up at the same place, regardless of how much stochasticity there is. Like whether you get 10% of the stuff all time, or all the stuff 10% of the time, I don’t have an incredibly strong preference between those, for kind of complicated reasons. I think I would guess … so, in general, if there’s two actors who are equally powerful, they could fight it out and then just see what happened and then behind a veil of ignorance, each of them wins half the time and crushes the other.I think normally, people would prefer to reach comprises short of that. So that is, imagine how that conflict would go and say, “Well if you’re someone who would be more likely to win, then you’ll extract a bunch of concessions from the weaker party.’ But everyone is incentivized to reach an agreement where they don’t have an all out war. In general, that’s how things normally go amongst humans. We’re able to avoid all out war most of the time, though not all the time.I would, in general, guess that AI systems will be better at that. Certainly in the long run, I think it’s pretty clear AI systems will be better at negotiating to reach positive sum trades, where avoiding war is often a example of a positive sum trade. It’s conceivable in the short term that you have AI systems that are very good at some kinds of tasks and not very good at diplomacy, or not very good at reaching agreement or these kinds of tests. But I don’t have a super strong view about that.I think that’s the kind of thing that would determine to what extent you should predict there to be war. If people have transferred most of the decision making authority to machines, or a lot of decision making authority to machines, then you care a lot about things like, are machines really good at waging war but not really changing the process of diplomacy? If they have differential responsibility in that kind of respect, then you get an outcome that’s more random and someone will crush everyone else, and if you’re better at striking agreements, then you’re more likely to say like, “Well, look, here’s the allocation of resources … we’ll allocate influence according to the results of what would happen if we fought. Then let’s all not fight.”"} {"prompt": "One topic that you’ve written quite a lot about is credible commitments and the need for organizations to be honest. I guess part of that is because it seems like it’s going to be very important in the future for organizations that are involved in the development of AGI to be able to coordinate around safety and alignment and to avoid getting into races with one another. Or to have a just a general environment of mistrust, where they have reasons to go faster in order to out compete other groups. 
Has anyone ever attempted to have organizations that are as credible in their commitments as this? Do you have much hope that we’ll be able to do that?", "completion": "So certainly I think in the context of arms control agreements and monitoring, some efforts are made for one organization to be able to credibly commit that they are … credibly demonstrate that they’re abiding by some agreement. I think that the kind of thing I talked about … So I wrote this blog post on honest organizations, I think the kind of measure I’m discussing there is both somewhat more extreme than things that would … like a government would normally be open to and also more tailored for this setting, where you have an organization which is currently not under the spotlight, which is trying to set itself up in such a way that it’s prepared to be trustworthy in the future, if it is under the spotlight.I’m not aware of any organizations having tried that kind of thing. So a private organization saying, “Well, we expect some day in the future, we might want to coordinate in this way and be regulated in this way so we’re going to try and constitute ourselves such that it’s very easy for someone to verify that we’re complying with an agreement or a law.” I’m not aware of people really having tried that much. I think there’s some things that are implicitly this way and companies can change who they hire, they can try and be more trustworthy by having executives, or having people on the board, or having monitors embedded within the organization that they think stakeholders will trust. Certainly a lot of precedent for that. Yeah, I think the reason you gave for why this seems important to me in this context is basically right.I’m concerned about the setting where there’s some trade-off between the capability of the AI systems you build and safety. In the context of such a trade-off, you’re reasonably likely to want some agreement that says, “Everyone is going to meet this bar on safety.” Given that everyone has committed to meet that bar, there’s not really an incentive then to cut … or they’re not able to follow the incentive to cut corners on safety, say. So you might want to make that …. That agreement might take place as an informal agreement amongst AI developers, it might take place as domestic regulation or law enforcement would like to allow AI companies to continue operating, but would like to verify they’re not going to take over the world.It might take the context of agreements among states, which would themselves be largely … An agreement among states about AI would involve the US or China having some unusually high degree of trust or insight into what firms in the other country are doing. So I’m thinking forward to that kind of agreement and seems like you would need machinery in place that’s not currently in place. Or it would be very, very hard at the moment. So anything you could do to make it easier seems like it would be … potentially you could make it quite a lot easier. There’s a lot of room there."} {"prompt": "Is this in itself a good reason for anyone who’s involved in AI research to maintain an extremely high level of integrity so that they will be trusted in future?", "completion": "I think having a very high level of integrity sounds good in general. As a utilitarian, I do like it if the people engaged in important projects are mostly in it for their stated goals and want to make the world better. 
It seems like there’s a somewhat different thing which is how trustworthy are you to the external stakeholders who wouldn’t otherwise have trusted your organization. Which I think is different from the normal … if we were to rate people by integrity, that would be a quite different ranking than ranking them by demonstrable integrity to people very far away who don’t necessarily trust the rest of the organization they’re involved in."} {"prompt": "I didn’t quite get that. Can you explain that?", "completion": "So I could say there’s both … If I’m interacting with someone in the context … like I’m interacting with a colleague. I have some sense of how much they conduct themselves with integrity. It’s like, one, I could rank people by that. I’d love it if the people who were actually involved in making AI were people who I’d rank as super high integrity.Because then a different question, which is suppose you have some firm, and then you have, there’s someone in the Chinese defense establishment reasoning about the conduct of that firm. They don’t really care that much probably, if there’s someone I would judge as high integrity involved in the process because they don’t have the information that I’m using to make that judgment. From their perspective, they care a lot of about the firm being instructed such that they feel that they understand what the firm is doing. They don’t feel any uncertainty about whether, in particular, they have minimal suspicion that a formal agreement is just cover for US firms to be cutting corners and delaying their competitors. They really want to have a lot of insight into what is happening at the firm. They don’t have some confidence that there’s not some unobserved collusion between the US defense establishment and this firm that nominally is complying with some international agreement, to undermine that agreement. That’s the example of states looking into firms.But also in the example of firms looking into firms, similarly, if I am looking in, there’s some notion of integrity that would be relevant for two researchers at Baidu looking, interacting with each other and thinking about how much integrity they have. Something quite different that would be helpful for me looking into AI research at Baidu actually believing that AI research at Baidu is being conducted, when they make public statements, those statements are an accurate reflection of what they’re doing. They aren’t collaborating. There isn’t behind the scenes a bunch of work to undermine nominal agreements."} {"prompt": "Yeah, I think that it is very valuable for people in this industry to be trustworthy for all of these reason, but I guess I am a bit skeptical that trust alone is going to be enough, in part for the reasons you just gave. There’s that famous Russian proverb, trust but verify. It seems like there’s been a lot of talk, at least publicly, about the importance of trust, and maybe not enough about how we can come up with better ways of verifying what people’s behavior actually is. I mean, one option, I guess, would just be to have people from different organizations all working together in the same building, or to move them together so they can see what other groups are doing, which allows them to have a lot more trust just because they have much more visibility. How do you feel about that?", "completion": "Yeah, so I think I would be pretty pessimistic about reaching any kind of substantive and serious agreement based only on trust for the other actors in the space. 
It may be possible in some … yeah, it’s conceivable amongst Western firms that are already quite closely, where there’s been a bunch of turnover of staff from one to the other and everyone knows everyone. It’s maybe be conceivable in that case. In general, when I talk about agreements, I’m imagining trust as a complement to fairly involved monitoring and enforcement mechanisms.The modern enforcement problem in this context is quite difficult. That is it’s very, very hard for me to know, suppose I’ve reached, firm A and firm B have reached some nominal agreement. They’re only going to develop some AI that’s safe according to some standard. It’s very, very hard for firm A to demonstrate that to firm B without literally showing all of their, without giving firm B enough information they could basically take everything or benefit from all of the research that firm A is doing. There’s no easy solution to this problem. The problem is easier to the extent you believe that the firm is not running a completely fraudulent operation to maintain some appearances, but then in addition to have some … In addition to having enough insight to verify that, you still need to do a whole bunch of work to actually control how development is going.I’m just running a bunch of code on some giant computing cluster, you can look and you can see, indeed, they’re running some code on this cluster. Even if I literally showed you all of the code I was running on the cluster, that’s actually that, wouldn’t be that helpful. It’s very hard for you to trust what I’m doing unless you’re literally have watched the entire process by which the code was produced. Or at least, you’re confident there wasn’t some other process hidden away that’s writing the real code, and the thing you can see is just a cover by which it looks like we’re running some scheduling job, but actually it’s just a … it’s carrying some real payload that’s a bunch of actual AI research that the results are getting smuggled out to the real AI research group."} {"prompt": "Could you have an agreement in which every organization accepts that all of the other groups are going to try to put clandestine informants inside their organization, and that that’s just an acceptable thing for everyone to do to one another because it’s the only way that you could really believe what someone’s telling you?", "completion": "Yes, I think there’s a split between two ways of doing this kind of coordination. On one arm, you try and maintain something like the status quo, where you have a bunch of people independently pushing on the AI progress. In order to maintain that arm, there’s some limit on how much transparency different developers can have into each other’s research. That’s one arm. Then there’s a second arm where you just give up on that and you say yes, all of the information is going to leak.I think the difficulty in the first arm is that it’s incredibly, you have to walk this really fine line where you’re trying to give people enough insight, which probably does involve monitors, whistle blowing, other mechanisms whereby there are people who firm A trust embedded in firm B. That’s what makes it hard to do monitoring without leaking all the information. That you have to walk that fine line. 
Then, if you want to leak all the information, the main difficulty seems to be you have to reach some new agreement about how you’re actually going to divide the fruits of AI research. Right now, there’s some implicit status quo, where people who make more AI progress expect to capture some benefits by virtue of having made more AI progress. You could say, no, we’re going to deviate from the status quo and just agree that we’re going to develop AI effectively jointly. Either because it’s literally joint, or because we’ve all opened … or the leader has opened themselves up to enough monitoring that they cease to be the leader. If you do that, then you have to reach some agreement where you say, here’s how we compensate the leader for the fact that they were the leader. Either that or the leader has to be willing to say, yep, I used to have a high valuation because I was doing so well in AI, and now I’m just happy to grant that that advantage is going to get eroded, and I’m happy to do that because it reduces the risk of the world being destroyed. I think both of those seem like reasonable options to me. Which one you take depends a little bit upon how serious the problems appear to be and what the actual structure of the field is like. Coordinating is more reasonable if the relevant actors are close, such that … well, it’s more reasonable if there’s an obvious leader who’s going to capture the benefits and is reasonably willing to distribute them, or if somehow there’s not a big difference between the players … If you imagine the US and China both believing that, like, things are hard if each of them believes that they’re ahead in AI and each of them believes that they’re going to benefit by having AI research which isn’t available to their competitor. Things are hard if both of them believe that they’re ahead, and things are easy if both of them believe that they’re behind. If they both have an accurate appraisal of the situation and understand there’s not a big difference, then maybe you’re also okay, because everyone’s fine saying, sure, I’m fine leaking because I know that that’s roughly the same as … I’m not going to lose a whole lot by leaking information to you."} {"prompt": "Okay. Let’s turn now to this question of fast versus slow takeoff of artificial intelligence. Historically, a lot of people who’ve been worried about AI alignment have tended to take the view that they expected progress to be relatively gradual for a while, and then to suddenly accelerate and take off very quickly over a period of days or weeks or months rather than years. But you’ve, for some time, been promoting the view that you think the takeoff of general AI is going to be more gradual than that. Do you want to just explain your general view?", "completion": "Yeah, so it’s worth clarifying that when I say slow, I think I still mean very fast compared to most people’s expectations. I think that a transition taking place over a few years, maybe two years, between AI having very significant economic impact and literally doing everything sounds pretty plausible. I think when people think about such a two-year transition, to most people on the street, that sounds like a pretty fast takeoff. I think that’s important to clarify. That when I say slow, I don’t mean what most people mean by slow. Another thing that’s important to clarify is that I think there’s rough agreement amongst the alignment and safety crowd about what would happen if we had human level AI. 
That is everyone agrees that at that point, progress has probably exploded and is occurring very quickly, and the main disagreement is about what happens in advance of that. I think I have the view that in advance of that, the world has already changed very substantially. You’re already likely exposed to catastrophic AI risk, and in particular, when someone develops human level AI, it’s not going to emerge in a world like the world of today where we can say that indeed, having human level AI today would give you a decisive strategic advantage. Instead, it will emerge in a world which is already much, much crazier than the world of today, where having a human AI gives you some more modest advantage."} {"prompt": "Yeah, do you want to paint a picture for us of what that world might look like?", "completion": "Yeah, so I guess there are a bunch of different parts of the worlds, and I can focus on different ones, but I can try and give some random facts or some random view, like facts from that world. They’re not real facts. They’re Paul’s wild speculations. I guess, in terms of calibrating what AI progress looks like, or how rapid it is, I think maybe two things that seem reasonable to think about are, the current rate of progress and information technology in general. That would suggest something like, maybe in the case of AI, like falling in costs by a factor of two every year-ish or every six to 12 months.Another thing that I think is important to get an intuitive sense of scale is to compare to intelligence in nature. I think when people do intuitive extrapolation of AI, they often think about abilities within the human range. One thing that I do agree with proponents of fast takeoff about is that that’s not a very accurate perspective when thinking about AI.I think about better way to compare is to look at what evolution was able to do with varying amounts of compute. If you look at what each order of magnitude buys you in nature, you’re going from insects to small fish to lizards to rats to crows to primates to humans. Each of those is one order of magnitude, roughly, so you should be thinking of there are these jumps. It is the case that the different between insect and lizard feels a lot smaller to us and is less intuitive significance than the difference between primate and human or crow and primate, so when I’m thinking about AI capabilities, I’m imagining, intuitively, and this is not that accurate, but I think is useful as an example to ground things out, I’m imagining this line raising and one day you have, or one year you have an AI which is capable of very simple learning tasks and motor control, and then a few years later … A year later, you have an AI that’s capable of slightly more sophisticated learning, now it learns as well as a crow or something, that AI is starting to get deployed as quickly as possible in the world and having a transformative impact, and then it’s a year later that AI has taken over the process of doing science from humans. Yeah, I think that’s important to have in mind as background for talking about what this world looks like."} {"prompt": "What tasks can you put an AI that’s as smart as a crow on that are economically valuable?", "completion": "I think there’s a few kinds of answers. Once place where I think you definitely have a big impact is in robotics and domains like manufacturing logistics and construction. 
That is, I think lower animals are probably good enough at motor control that you'd have much, much better robotics than you have now. Today, I would say robots that learn don't really work very well, or at all. Today the way we get robotics to work is you really organize your manufacturing process around them. They're quite expensive and tricky. It's just hard to roll them out. I think in this world, probably even before you have crow level AI, you have robots that are very general and flexible. They can be applied not only on an assembly line, but, okay, one, they take the place of humans on assembly lines quite reliably, but they can also then be applied in logistics to loading and unloading trucks, driving trucks, managing warehouses, construction."} {"prompt": "Maybe image identification as well?", "completion": "They could certainly do image identification well. I think that's the sort of thing we get a little bit earlier. I think that's a large part of … Today those activities are a large part of the economy. Maybe this stuff we just listed is something … I don't actually know; in the US it's probably lower than elsewhere, but still more than 10% of our economy, less than 25%. There's another class of activities. If you look at the intellectual work humans do, I think a significant part of it could be done by very cheap AIs at the level of crows or not much more sophisticated than crows. There's also a significant part that requires a lot more sophistication. I think we're very uncertain about how hard doing science is. As an example, I think back in the day we would have said playing board games that are designed to tax human intelligence, like chess or go, is really quite hard, and it feels to humans like they're really able to leverage all their intelligence doing it. It turns out that playing chess, from the perspective of actually designing a computation to play chess, is incredibly easy, so it takes a brain very much smaller than an insect brain in order to play chess much better than a human. I think it's pretty clear at this point that science makes better use of human brains than chess does, but it's actually not clear how much better. It's totally conceivable from our current perspective, I think, that an intelligence that was as smart as a crow, but was actually designed for doing science, actually designed for doing engineering, for advancing technology as rapidly as possible, it is quite conceivable that such a brain would actually outcompete humans pretty badly at those tasks. I think that's another important thing to have in mind, and then when we talk about when stuff goes crazy, I would guess humans are an upper bound for when stuff goes crazy. That is, we know that if we had cheap simulated humans, technological progress would be much, much faster than it is today. But probably stuff goes crazy somewhat before you actually get to humans. It's not clear how many orders of magnitude smaller a brain can be and still have stuff go crazy. I think probably at least one seems safe, and then two or three is definitely plausible.
Can you explain, can you try to make that more intuitive?", "completion": "Yeah, so I mentioned this analogy to chess, which is when humans play chess, we apply a lot of faculties that we evolved for other purposes to play chess well, and we play chess much, much better than someone using pencil and paper to mechanically play chess at the speed that a human could. We’re able to get a lot of mileage out of all of these other … I know we evolved to be really good at physical manipulation and planning in physical contexts and reasoning about social situations. That makes us, in some sense, it lets us play good chess much better than if we didn’t have all this capacities.That said, if you just write down a simple algorithm for playing chess, and you run it with a tiny, tiny fraction of the compute that a human uses in order to play chess, it crushes humans incredibly consistently. So, in a similar sense, if you imagine this project of look at some technological problem, consider a bunch of possible solutions, understand what the real obstructions are and how we can try and overcome those obstructions, a lot of the stuff we do there, we know that humans are much, much better than a simple mechanical algorithm applied to those tasks. That is we’re able to leverage all of these abilities that we … All these abilities that helped us in the evolutionary environment, we’re able to leverage to do really incredible things in terms of technological progress, or in terms of doing science or designing systems or et cetera.But what’s not clear is if you actually had created, so again, if you take the computations of the human brain, and you actually put it in a shape that’s optimal for playing chess, it plays chess many, many orders of magnitude better than a human. Similarly, if you took the computation of the human brain and you actually reorganized it, so you said now, instead of a human explicitly considering some possibilities for how to approach is problem, a computer is going to generate a billion possibilities per second for possible solutions to this problem. In many respects, we know that that computation would be much, much better than humans at resolving some parts of science and engineering.There’s been question of how, exactly how much leverage are we getting out of all this evolutionary heuristics. It’s not surprising that in the case of chess, we’re getting much less mileage than we do for tasks that are closer, that more leverage the full range of what the human brain does, or closer to tasks the human brain was designed for. I think science is, and technology are intermediate place, where they’re still really, really not close to what human brains are designed to do. It’s not that surprising if you can make brains that are really a lot better at science and technology than humans are. I think a priori, it’s not that much more surprising for science and technology than it would be for chess."} {"prompt": "Okay. I took us some part away from the core of this fast versus slow takeoff discussion. One part of your argument that I think isn’t immediately obviously is that when you’re saying that in a sense the takeoff will be slow, you’re actually saying that dumber AI will have a lot more impact on the economy and on the world than other people think? Why do you disagree with other people about what? Why do you think that earlier versions of machine learning could already be having a transformative impact?", "completion": "I think there’s a bunch of dimensions of this disagreement. 
An interesting fact, I think, about the effective altruism and AI safety community is that there's a surprising amount of agreement about takeoff being fast, but a really quite large diversity of views about why takeoff will be fast. Certainly the arguments people would emphasize, if you were to talk with them, would be very, very different, and so my answer to this question is different for different people. I think one general issue is that other people look at the evolutionary record, and they more see this transition between lower primates and humans, where humans seem incredibly good at doing a kind of reasoning that builds on itself and discovers new things and accumulates them over time culturally. They more see that as being this jump that occurred around human intelligence and is likely to be recapitulated in AI. I think I more see that jump as occurring when it did because of the structure of evolution. Evolution was not trying to optimize humans for cultural accumulation in any particularly meaningful sense. It was trying to optimize humans for the suite of tasks that primates are engaged in, and incidentally humans became very good at cultural accumulation and reasoning. I think if you optimize AI systems for reasoning, it appears much, much earlier. If evolution had been trying to make AIs that would build a civilization, or had been trying to optimize for creatures that would build a civilization, instead of going straight to humans, who have some level of ability at forming a technological civilization, it would have been able to produce crappier technological civilizations earlier. I now think it's probably not the case that if you left monkeys for long enough you would get a space faring civilization, but I think that's not a consequence of monkeys just being too dumb to do it. I think it's largely a consequence of the way that monkeys' social dynamics work: the way that imitation works amongst monkeys, the way cultural accumulation works, and how often things are forgotten. I think that this discontinuity that we observe in the historical record between lower primates and humans, I don't feel like it's … It certainly provides some indication about what changes you should expect to see in the context of AI, but I don't feel like it's giving us a really robust indicator that it's a really closely analogous situation. That's one important difference. There's this jump in the evolutionary record. I expect, to the extent there's a similar jump, we would see it significantly earlier, and we would jump to something significantly dumber than humans. It's a significant difference between my view and the view of some, I don't know, maybe one third of people who think takeoff is likely to be fast. There are, of course, other differences. In general, I look at the historical record, and it feels to me like there's an extremely strong regularity of the form: before you're able to make a really great version of something, you're able to make a much, much worse version of it. For example, before you're able to make a really fast computer, you're able to make a really bad computer. Before you're able to make a really big explosive, you're able to make a really crappy explosive that's unreliable and extremely expensive. 
Before you’re able to make a robot that’s able to do some very interesting tasks, you’re able to make a robot which is able to do the tasks with lower reliability or a greater expense or in a narrower range of cases. That seems to me like a pretty robust regularity.It seems like it’s most robust in cases where the metrics that we’re tracking is something that people are really trying to optimize. If you’re looking at a metric that people aren’t trying to optimize, like how many books are there in the world. How many books are there ein the world is a property that changes discontinuously over the historical record. I think the reason for that is just ’cause no one is trying to increase the number of books in the world. It’s incidental. There is a point in history where books are relatively inefficient way of doing something, and it switched to books being an efficient way to do something, and the number of books increases dramatically.If you look at a measure of people who are actually trying to optimize, like how quickly information is transmitted, how many facts the average person knows, it’s a … not the average person, but how many facts someone trying to learn facts knows, those metrics aren’t going to change discontinuously in the same way that how many books exist will change. I think how smart is your AI is the kind of thing that’s not going to change. That’s the kind of things people are really, really pushing on and caring a lot about, how economically valuable is your AI.I think that this historical regularity probably applies to the case of AI. There are a few plausible historical exceptions. I think the strongest one, by far, is the nuclear weapons case, but I think that that case, first, is there are a lot of very good a priori arguments for discontinuity around that case that are much, much stronger than the arguments we give for AI. Even as such, I think the extent of the discontinuity is normally overstated by people talking about the historical record. That’s a second disagreement.I think a third disagreement, is I think people make a lot of sloppy arguments or arguments that don’t quite work. I think they’re, I feel like, a little bit less uncertain because I feel like it’s just a matter of if you work through the arguments, they don’t really hold together.I think an example of that is I think people often make this argument of imagining your AI is being a human who makes mistakes sometimes, just an epsilon fraction of the time or fraction of cases where your AI can’t do what a human could do. You’re just decreasing epsilon over time until you hit some critical threshold where now your AI becomes super useful. Once it’s reliable enough, like when it gets to zero mistakes or one in a million mistakes. I think that model is like … there’s not actually, or it looks a priori like a reasonable-ish model, but then you actually think about it. Your AI is not like a human that’s degraded in some way. If you take human and you degrade them, there is a discontinuity that gets really low levels of degradation, but in fact, your AI is falling along a very different trajectory. The conclusions from that model turn out to be very specific to the way that you were thinking of AI as a degraded human. Those are the three classes of disagreements."} {"prompt": "Let’s take that it’s given that you’re right that an AI takeoff will be more gradual than some people think. Although, I guess, still very fast by human time scales. 
What kind of strategic implications does that have for you and me today trying to make that transition go better?", "completion": "I think the biggest strategic question that I think about regularly that’s influenced by this is to what extent early developers of AI will have a lot of leeway to do what they want with the AI that they’ve built. How much advantage will they have over the rest of the world?I think some people have a model in which early developers of AI will be at huge advantage. They can take their time or they can be very picky about how they want to deploy their AI, and nevertheless, radically reshape the world. I think that’s conceivable, but it’s much more likely that the earlier developers of AI will be developing AI in a world that already contains quite a lot of AI that’s almost as good, and they really won’t have that much breathing room. They won’t be able to reap a tremendous windfall profit. They won’t be able to be really picky about how they use their AI. You won’t be able to take your human level AI and send it out on the internet to take over every computer because this will occur in a world where all the computers that were easy to take over have already been taken over by much dumber AIs.It’s more like you’re existing in this soup of a bunch of very powerful systems. You can’t just go out into a world … people imagine something like the world of today and human level AI venturing out into that world. In that scenario, you’re able to do an incredible amount of stuff. You’re able to basically steal everyone’s stuff if you want to steal everyone’s stuff. You’re able to win a war if you want to win a war. I think that that model, so that model I think is less likely under a slow takeoff, though it still depends on quantitatively exactly how slow. It especially depends on maybe there’s some way … if a military is to develop AI in a way where they selectively … They can develop AI in a way that would increase the probability of this outcome if they’re really aiming for this outcome of having a decisive strategic advantage. If this doesn’t happen, if the person who develops AI doesn’t have this kind of leeway, then there are, I think the nature of this safety problem changes a little bit.In one respect, it gets harder because now you really want to be building an AI that can do … you’re not going to get to be picky about what tasks you’re applying your AI to. You need an AI that can be applied to any task. That’s going to be an AI that can compete with a world full of a bunch of other AIs. You can’t just say I’m going to focus on those tasks there’s a clear definition of what I’m trying to do, or I’m just going to pick a particular task, which is sufficient to obtain a strategic advantage and focus on that one. You really have to say, based on the way the world is set up, there’s a bunch of tasks that people want to apply AI to, and you need to be able to make those AI safe.In that respect, it makes the problem substantially harder. It makes the problem easier in the sense that now you do get a little bit of a learning period. It’s like as AI ramps up, people get to see a bunch of stuff going wrong. We get to roll out a bunch of systems and see how they work. So it’s not like there’s this one shot. There’s this moment where you press the button and then your AI goes, and it either destroys the world or it doesn’t. Its more there’s a whole bunch of buttons. 
Every day you push a new button, and if you mess up then you're very unhappy that day, but it's not literally the end of the world until you push the button the 60th time. It also changes the nature of the policy or coordination problem a little bit. I think that tends to make the coordination problem harder and changes your sense of exactly what that problem will look like. In particular, it's unlikely to be between two AI developers who are racing to build a powerful AI that then takes over the world. It's more likely there are many people developing AI, or not many, but whatever. Let's say there are a few companies developing AI, which is then being used by a very, very large number of people, both in law enforcement and in the military and in private industry. The kind of agreement you want is a new agreement between those players. Again, the problem is easier in some sense, in that now the military significance is not as clear. It's conceivable that that industry isn't nationalized. That this development isn't being done by the military. That it's instead being treated in a similar way to other strategically important industries. Then it's harder because there's not just this one moment. You don't just have to hold your breath until an AI takes over the world and everything changes. You need to actually set up some sustainable regime where people are happy with the way AI development is going. People are going to continue to think and engage in normal economic ways as they're developing AI. In that sense, the problem gets harder. For both the technical and the policy problems, some aspects become harder and some aspects become easier."} {"prompt": "Yeah. That's a very good answer. Given that other people would disagree with you, though, what do you think are the chances that you're wrong about this, and what's the counterargument that gives you the greatest concern?", "completion": "Yeah, I feel pretty uncertain about this question. I think we could try to quantify an answer to how fast this takeoff is by talking about how much time elapses between certain benchmarks being met, or, if you have a one year lead in the development of AI, how much of an advantage that gives you at various points in development. I think that when I break out very concrete consequences in the world, like if I ask how likely is it that the person who develops AI will be able to achieve a decisive strategic advantage, for some operationalization, at some point, then I find myself disagreeing with other people's probabilities, but I can't disagree that strongly. Maybe other people will assign a 2/3 probability to that event, and I'll assign a 1/4 probability to that event, which is a pretty big disagreement, but certainly doesn't look like either side being confident. Say it's 2/3 versus 1/3. It doesn't look like either side being super confident in their answer, and everyone needs to be willing to pursue policies that are robust across that uncertainty. I think the thing that makes me most sympathetic to the fast takeoff view is not any argument about qualitative change around human level. It's more an argument of just looking quantitatively at the speed of development and thinking about what happens if you were scaling up on that timescale. If every three months your AIs were equivalent to an animal with a brain twice as large, it would not be many months between AI that seemed minimally useful and AI that was conferring a strategic advantage. 
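To make the arithmetic behind that hypothetical concrete, here is a toy calculation, not Paul's own, using the rough numbers from earlier in the conversation: each step insect, fish, lizard, rat, crow, primate, human is treated as roughly one order of magnitude, and "minimally useful" is taken, purely for illustration, to be around crow level.

```python
import math

# Toy illustration only: assumes the hypothetical rate above (effective
# "brain size" doubling every three months) and treats crow -> primate ->
# human as roughly two orders of magnitude, per the earlier comparison.
months_per_doubling = 3
orders_of_magnitude = 2
doublings = orders_of_magnitude * math.log2(10)          # about 6.6 doublings
print(f"~{doublings * months_per_doubling:.0f} months")  # roughly 20 months
```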
It’s just this quantitative question of exactly how fast this development, and even there’s no qualitative change, you can have development that’s fast enough that it’s correctly described as a fast takeoff. In that case, the view I’ve described of the world is not as accurate. We’re more in that scenario where the AI developer can just keep things under wraps during these extra nine months, and then, if they’d like, have a lot of leeway about what to do."} {"prompt": "How strong do you think is the argument that people involved in AI alignment work should focus on the fast takeoff scenario even if it’s less likely because they expect to get more leverage, personally, if that scenario does come to pass?", "completion": "I think that’s a … There’s definitely a consideration that direction. I think it tends to be significantly weaker than the focusing on short time. There’s a similar argument for focusing on short timelines, which I think is quite a bit stronger. I mean, I think that … The way that argument runs, the reason you might focus on fast timelines, or on fast takeoff, is because over the course of a slow takeoff, there will be lots of opportunities to do additional work and additional experimentation to figure out what’s going on.If you have a view where that work can just replace anything you could do now, then anything you could do now becomes relatively unimportant. If you have a view where there’s any complementarity between work we do now and work that’s done. Imagine you have this, let’s say, one to two years period where people are really scrambling, where it becomes clear to many people that there’s a serious problem here, and we’d like to fix it. Because any kind of complementarity between the work we do now and the work that they’re doing during that period, then that doesn’t really undercut doing work now.I think that it’s good. We can then advance to do things like understand the nature of the problem, the nature of the alignment problem, understand much more about how difficult the problem is, set up institutions such that they’re prepared to make these investments, and I think those things are maybe a little bit better in fast takeoff worlds, but it’s not a huge difference. I think it’s not more than … intuitively, I think it’s not more than a factor of two, but I haven’t thought that much about it. It might be … Maybe it’s a little more than that.The short timelines thing I think is a much larger update."} {"prompt": "Yeah. Tell us about that.", "completion": "Just, so if you think that AI might be surprisingly soon, in general, what surprisingly soon means is that many people are surprised, so they haven’t made much investment. In those worlds, there’s a lot less, much less has been done. Certainly, if AI was developed in 50 years, I do not think it’s the case that the research I’m doing now could really, very plausibly be relevant, just because there’s so much time that other people are going to have to rediscover the same things.If you get a year ahead now, that means maybe five years from now you’re 11 months ahead of where you would have been otherwise, and five years later, you’re eight months of where you would have been otherwise. Over time, the advantage just shrinks more and more. 
If AI's developed in 10 years, then something crazy happened; the world at large has really been asleep at the wheel if we're going to have human level AI in 10 years, and in that world, it's very easy to have a very large impact. Of course, if AI is developed in 50 years, it could happen that people are asleep at the wheel in 40 years. I don't know, you can invest now for the case that people are asleep at the wheel. You aren't really foreclosing the possibility of people being asleep in the future. If they're not asleep at the wheel in the future, then the work we do now has a much lower impact. It's mostly, I guess, just a neglectedness argument, where you would not really expect, a priori, for AI to be incredibly neglected. If, in fact, people with short timelines are right, if the 15% in 10 years, 35% in 20 years is right, then AI is absurdly neglected at the moment. Right? In that world, what we're currently seeing in ML is not unjustified hype but a desperate attempt to catch up to what would be an acceptable level of investment given the actual probabilities we face."} {"prompt": "Earlier you mentioned that if you have this two year period, where economic growth has really accelerated in a very visible way, that people would already be freaking out. Do you have a vision for exactly what that freaking out would look like, and what implications that has?", "completion": "I think there are different consequences in different domains. Amongst AI researchers, I think a big consequence is that a bunch of discussions that are currently hypothetical and strange, the way we talk about catastrophic risk caused by AI, the possibility of AI much smarter than humans, or decisions being made by machines, a bunch of those issues will stop being weird considerations or speculative arguments and will start being: this is basically already happening, we're really freaked out about where this is going, we feel very viscerally concerned. I think that's a thing that will have a significant effect on both what kind of research people are doing and also how open they are to various kinds of coordination. I guess that's a very optimistic view, and I think it's totally plausible that … Many people are much more pessimistic on that front than I am, but I feel like if we're in this regime, people will really be thinking about prioritizing the thing that's clearly coming, and they will be thinking about catastrophic risk from AI as even more clear than powerful AI, just because we'll be living in this world where AI is really … you're already living in a world where stuff is changing too fast for humans to understand in quite a clear way. In some respects, our current world has that character, and that makes it a lot easier to make this case than it would have been 15 years ago. But that will be much, much more the case in the future."} {"prompt": "Can you imagine countries and firms hoarding computational ability because they don't want to allow anyone else to get in on the game?", "completion": "I think mostly I imagine the default is just that asset prices get bid up a ton. It's not that you hoard computation so much as that computers become incredibly expensive, and that flows backwards to semiconductor fabrication becoming incredibly expensive. IP and chip companies become relatively valuable. That could easily get competed away. 
I think to first order, the economic story is probably what I expect, but then if you look at the world and imagine asset prices in some area rising by a factor of 10 over the course of a few years or a year, I think the rough economic story is probably still basically right, but markets, or the formal structure of markets, can pretty easily break down in that case. You can easily end up in a world where computation is very expensive, but prices are too sticky to actually adjust in the correct way. Instead, that ends up looking like computers are still somewhat cheap, but now effectively they're impossible for everyone to buy, or machine learning hardware is effectively impossible for people to buy at the nominal price. That world might look more like people hoarding computation, which I would say is mostly a symptom of an inefficient market world. It's just that the price of your computer has gone up by an absurd amount because everyone thinks this is incredibly important now, and it's hard to produce computers as fast as people want them. In an inefficient market world, that ends up looking like freaking out, and takes the form partly of a policy response instead of a market response, so strategic behavior by militaries and large firms."} {"prompt": "Okay, that has been the discussion of how fast or gradual this transition will be. Let's talk now about when you think this thing might happen. What's your best guess for, yeah, AI progress timelines?", "completion": "I normally think about this question in terms of what's the probability of some particular development by 10 or 20 years, rather than thinking about a median, because those seem like the most decision relevant numbers, basically. Maybe one could also, if you had very short timelines, give probabilities on less than 10 years. I think that my probability for human labor being obsolete within 10 years is probably something in the ballpark of 15%, and within 20 years is something in the ballpark of 35%. Prior to human labor being obsolete, you have some window of maybe a few years during which stuff is already getting extremely crazy. Probably AI risk becomes a big deal. We could have permanently sunk the ship somewhat before, one to two years before, we actually have human labor being obsolete. Those are my current best guesses. I feel super uncertain about … I have numbers offhand because I've been asked before, but I still feel very uncertain about those numbers. I think it's quite likely they'll change over the coming year, not just because new evidence comes in, but also because I continue to reflect on my views. There are a lot of people, whose views I think are quite reasonable, making reasonable arguments for numbers both higher and lower, for timelines both shorter than that and longer than that. Overall, I come away pretty confused about why people currently are as confident as they are in their views. I think compared to the world at large, the view I've described is incredibly aggressive, incredibly soon. I think compared to the community of people who think about this a lot, I'm somewhere in the middle of the distribution, and amongst people whose thinking I most respect, maybe I'm also somewhere in the middle of the distribution. 
I don’t quite understand why people come away with much higher or much lower numbers than that. I don’t have a good … It seems to me like the arguments people are making on both sides are really quite shaky. I can totally imagine that after doing … After being more thoughtful, I would come away with higher or lower numbers, but I don’t feel convinced that people who are much more confident one way or the other have actually done the kind of analysis that I should defer to them on. That’s said, I also I don’t think I’ve done the kind of analysis that other people should really be deferring to me on."} {"prompt": "There’s been discussion of fire alarms, which are kind of indicators that you would get ahead of time, that you’re about to develop a really transformative AI. Do you think that there will be fire alarms that will give us several years, or five or ten-years’ notice that this is going to happen? And what might those alarms look like?", "completion": "I think that the answer to this question depends a lot on … There’s many different ways the AI could look. Different ways that AI could look have different signs in advance. I think if AIs developed very soon, say within the next 20 years, I think the best single guess for the way that it looks is a sort of … The techniques that we are using are more similar to evolution than they are to learning occurring within a human brain. And a way to get indications about where things are going is by comparing how well those techniques are working to how well evolution was able to do with different levels of … different computational resources. On that perspective, or in that scenario, what I think is the most likely scenario within 20 years, I think the most likely fire alarms are successfully replicating the intelligence of lower animals.Things like, right now we’re kind of at the stage where AI systems are … the sophistication is probably somewhere in the range of insect abilities. That’s my current best guess. And I’m very uncertain about that. I think as you move from insects to small vertebrates to larger vertebrates up to mice and then birds and so on, I think it becomes much, much more obvious. It’s easier to make this comparison and the behaviors become more qualitatively distinct. Also, just every order of magnitude gets you an order of magnitude closer to humans.I think before having broadly-human level AI, a reasonably good warning sign would be broadly lizard-level or broadly mouse-level AI, that is learning algorithms which are able to do about as well as a mouse in a distributional environment that’s about as broad as the distribution environments that mice are evolved to handle. I think that’s a bit of a problematic alarm for two reasons. One, it’s actually quite difficult to get a distribution of environments as broad as the distribution that a mouse faces, so there’s likely to be remaining concern. If you can replicate everything a mouse can do in a lab, that’s maybe not so impressive, and it’s very difficult to actually test for some distribution environments. Is it really flexing the most impressive mouse skills?I think that won’t be a huge problem for people … A very reasonable person looking at the evidence will still be able to get a good indication, but it will be a huge problem for establishing consensus about what’s going on. That’s one problem. And then the other problem was this issue I mentioned where it seems like transformative impacts should come significantly before broadly human-level AI. 
I think that a mouse-level AI would probably not give you that much warning, or broadly mouse-level AI would probably not give you that much warning. And so you need to be able to look a little bit earlier than mice. It’s plausible that in fact one should be regarding … One should really be diving into the comparison to insects now and say, can we really do this? It’s plausible to me that that’s the kind of … If we’re in this world where our procedures are similar to evolution, it’s plausible to me the insect thing should be a good indication, or one of the better indications, that we’ll be able to get in advance."} {"prompt": "There was this recent blog post that was doing the rounds on social media called, “An AI Winter is Coming,” which was broadly making the argument that people are realizing that current machine learning techniques can’t do the things that people have been hoping that they’ll be able to do over the last couple of years. That the range of situations they can handle is much more limited and that the author expects that the economic opportunities for them are gonna dry out somewhat, and an investment will shrink. As we’ve seen, so they claim, in the past when there’s been a lot of enthusiasm about AI, and then it hasn’t actually been able to do the things that we claimed. Do you think there’s much chance that that’s correct, and what’s your general take on this AI boom, AI winter view?", "completion": "I think that the position in that post are somewhat … I feel like the position in that post is fairly extreme in a way that’s not very plausible. For example, I think the author of that post is pessimistic about self-driving cars actually working because they won’t be sufficiently reliable. I think its correct to be like, this is a hard problem. I think that … I would be extremely happy to take a bet at pretty good odds against the world they’re imagining. I guess I … I also feel somewhat similarly about robotics at this point. I think what we’re currently able to do in the lab is approaching good enough that industrial robotics can … That’s a big … If the technology is able to work well, it’s a lot of value. I think we’re able to in the lab is a very strong indication that that is going to work in the reasonably short term.I think those things are pretty good indications that, say, current investment in the field is probably justified by, or the level of investment is plausible given the level of applications that we can foresee quite easily, though I don’t wanna comment on the form of investment. There’s maybe a second … I think I don’t consider the argument in the post … I think the arguments in the post are kind of wacky and not very careful. I think one thing that makes it a little bit tricky is this comparison. If you’re compare the kind of AI we’re building now to human intelligence, I think literally until the very end, actually, probably after the very end, you’re just gonna be, look there’s all these things that humans can do that our algorithms can’t do. I think one problem that’s just kind of a terrible way to do the comparison. That’s the kind of comparison that’s predictably going to leave you being really skeptical until the very, very end.I think there’s another question, which is, and maybe this is what they were getting at, which is, there’s a sense maybe amongst the … especially certainly deep-learning true believers, at the moment, that you can just take existing techniques and scale them quite far. 
If you just keep going, things are gonna keep getting better and better, and we're gonna get all the way to powerful AI like that. I think it's a quite interesting question whether that is … If we're in that world, then we're just gonna see machine learning continue to grow, so then we would not be in a bubble. We would be in the beginning of this ramp up to spending some substantial fraction of GDP on machine learning. That's one possibility. Another possibility is that some applications are going to work well, so maybe we'll get some simple robotics applications working well, which could be quite large, that could easily have impacts in the hundreds of billions or trillions of dollars. But things are gonna dry up long before they get to human level. I think that seems quite conceivable. Maybe I think it's a little bit more likely than not that at some point things pull back. I mean, it's somewhat less than 50% that the current wave of enthusiasm is going to just continue going up until we build human level AI. But I also think that's kind of plausible. I think people really want to call bubbles, in a way that results in a lot of irrationality. I think Scott Sumner writes about this a lot and I mostly agree with his take. When enthusiasm about something gets really high, that doesn't mean it's guaranteed that it's gonna continue going up. It can just be a bet that there's a one-third chance that it's gonna continue going up, or a one-half chance, and I think that … People are really happy about being self-satisfied after calling the bubble, after calling a level of enthusiasm that's unjustified. Sometimes they're right ex-ante, and the fact that some of those calls are right ex-ante makes it a lot more attractive to take this position. I think a lot of the time, ex-post, it was fine to say this was a bubble, but ex-ante, I think it's worth investing a bunch on the possibility that something is really, really important. I think that's kind of where we're at. I think that the arguments people are making that deep learning is doomed are mostly pretty weak. For example, because they're comparing deep learning to human intelligence, and that's just not the way to run this extrapolation. The way to run the extrapolation is to think about how tiny existing models are compared to the brain, and to think about, on the model where we're able to do a brain in 10 or 20 years, what should we be able to do now? And actually make that comparison instead of trying to say, look at all these tasks humans can do."} {"prompt": "What kinds of things should people do before we have an artificial general intelligence in order to, I guess, protect themselves financially, if they're potentially going to lose their jobs? Is there really anything meaningful that people can do to shield themselves from potentially negative effects?", "completion": "If the world continues to go well … If all that happens is that we build AI, and it just works the way that it would work in an efficient market world, there's no crazy turbulence, then the main change is, you shift from having … Currently, two-thirds of GDP gets paid out roughly as income. I think if you have a transition to human labor being obsolete, then you fall to roughly zero of GDP being paid as income, and all of it is paid out as returns on capital. 
From the perspective of a normal person, you either want to be benefiting from capital indirectly, like living in a state that uses capital to fund redistribution, or you just wanna have some savings. There's a question of how you'd wanna … The market is not really anticipating AI being a huge thing over 10 or 20 years, so you might wanna further hedge and say … If you thought this was pretty likely, then you may want to take a bet against the market and invest in stuff that's gonna be valuable in those cases. I think that, mostly, the very naive guess is not a crazy guess for how to do that. Investing more in tech companies. I am pretty optimistic about investing in semiconductor companies. Chip companies seem reasonable. A bunch of stuff that's complementary to AI is going to become valuable, so natural resources get bid up. In an efficient market world, the price of natural resources is one of the main things that benefits. As you make human labor really cheap, you just become limited on resources a lot. People who own Amazon presumably benefit a huge amount. People who run logistics, people who run manufacturing, etc. I think that generally just owning capital seems pretty good. Unfortunately, right now is not a great time to be investing, but still, I think that's not a dominant consideration when determining how much you should save."} {"prompt": "You say it's bad just because the stock market in general looks overvalued, based on price-to-earnings ratios?", "completion": "Yeah, it's hard to know what overvalued means exactly, but certainly it seems reasonable to think of it in terms of, if you buy a dollar of stocks, how much earnings are there to go around for that dollar of stocks, and it's pretty low, pretty unusually low. This might be how it is forever. I guess if you have the kind of view that I'm describing, if you think we're gonna move to an economy that's growing extremely rapidly, then you have to bet that the rate of return on capital is gonna go up, and so it's kind of … In some sense, you need to invest early because you wanna actually be owning physical assets, since that's where all of the value is going to accrue. But it's also a bummer to lock in relatively low rates of return."} {"prompt": "In the normal scenario, where that doesn't happen?", "completion": "No, in the … Even in … Suppose I make someone a loan. A way people normally hold capital would be making a loan … You make a loan now, you make a loan at 1% real interest for 20 years. You're pretty bummed if then people develop AI, and now the economy is growing by 25% a year. Your 1% a year loan is looking pretty crappy. And you're pretty unhappy about that. Stocks are a little bit better than that, but it depends a lot on … Yeah, stocks still take a little bit of a beating from that. I think this generally is a consideration that undercuts the basic … I think the basic thing you would do if you expected AI would be to save more, earn more capital if you can. I think that's undercut a little bit by the market being priced such that it's hard. That could be because a bunch of people are doing that, but that's not why it's happening. Prices aren't being bid up because everyone is reasoning in this way. Prices are being bid up just 'cause of unrelated cyclical factors."} {"prompt": "Let's talk now about some of the actual technical ideas you've had for how to make machine learning safer. One of those has been called iterated intelligence distillation and amplification, sometimes abbreviated as IDA. 
What is that idea in a nutshell?", "completion": "I think the starting point is realizing that it is easier, or it currently seems easier, to train an aligned AI system if you have access to some kind of overseer that's smarter than the AI you're trying to train. A lot of the traditional arguments about why alignment is really hard, or why the problem might be intractably difficult, really push on the fact that you're trying to train, say, a superintelligence, and you're just a human. And similarly, if you look at existing techniques, if you look at the kind of work people are currently doing in more mainstream alignment work, it's often implicitly predicated on the assumption that there's a human who can understand what the AI is doing, or there's a human who can behave approximately rationally, or a human who can evaluate how good the AI system's behavior is, or a human who can peer in at what the AI system is thinking and make sense of that decision process. And sometimes this dependence is a little bit subtle, but it seems to me like it's extremely common. Even when people aren't acknowledging it explicitly, a lot of the techniques are gonna have a hard time scaling to domains where the AI is a lot smarter than the overseer who's training it. I think, motivated by that observation, you could say, let's try and split the alignment problem into two parts, one of which is to try and train an aligned AI, assuming that you have an overseer smarter than that AI, and the second part is to actually produce an overseer who's smart enough to use that process, or smart enough to train that AI. The idea in iterative amplification is to start from a weak AI. At the beginning of training you can use a human. A human is smarter than your AI, so they can train the system. As the AI acquires capabilities that are comparable to those of a human, the human can use the AI that they're currently training as an assistant, to help them act as a more competent overseer. Over the course of training, you have this AI that's getting more and more competent, and the human at every point in time uses several copies of the current AI as assistants, to help them make smarter decisions. And the hope is that that process both preserves alignment and allows this overseer to always be smarter than the AI they're trying to train. And so the key steps of the analysis there are both solving this problem, the first problem I mentioned of training an AI when you have a smarter overseer, and then actually analyzing the behavior of the system consisting of a human plus several copies of the current AI acting as assistants to the human to help them make good decisions. In particular, as you move along the training, by the end of training, the human's role becomes kind of minimal, like if we imagine training a superintelligence. In that regime, we're just saying, can you somehow put together several copies of your current AI to act as the overseer? You have this AI trying to … Hopefully at each step it remains aligned. You put together a few copies of the AI to act as an overseer for itself."} {"prompt": "How is it that the AI doing the training and the human are gonna be smarter than the AI that they're trying to train? I mean, they're trying to make something that's smarter than them, right? But you're saying that, in combination, they can do a better job?", "completion": "Yeah, so the key step there, the reason why it's plausible … Let's suppose we're trying to train our AI to some level of capability. 
Suppose that you imagine the AI gets to think for two seconds before each decision it makes. Then, in this oversight process, we have multiple copies of the AI, each thinking for two seconds. If you imagine this in an analogy to humans, you have one smart human making a decision, and they're being overseen by a team of 20 slightly less-smart humans. And when I say slightly, I really mean slightly, like they're one gradient step behind. The hope is, and this is a complicated question or an open question at the moment … If you have this team of 20 very slightly dumber aligned agents, can they divide up the work of overseeing this AI so that, even though they're a tiny bit dumber, by working together, they can do it?"} {"prompt": "Okay, so is the idea that by throwing more processing power or more instances behind the somewhat dumber algorithm, they'll be smarter than the AI that they're trying to train, and therefore able to keep it under control and give it good instructions in concert with the human?", "completion": "That's roughly right, yeah. In particular, when we ask, what does this team have to do? We have this overseer, which is now a team of AIs working together. What does the team have to actually do? There may be two big classes of tasks they have to do. One is, they're gonna provide the incentives to the AI. This team is gonna look at what the AI does: the AI is gonna take some action, then our team of overseer AIs is going to evaluate that action and decide how good it was, and that's gonna be the reward signal that we use to train our AI. In order for that to work, this team of 20 AIs needs to be able to make a better judgment about how good actions are than the AI that they're trying to train. Right? Because, using this process, you can't really train an AI to make smarter decisions than that team would make. And so, there, we hope that you can sort of break it down. There's this question: how good was action X? You hope that you can break down the problem of evaluating that action into a bunch of tasks, each of which is a little bit easier, and then you can split up those tasks across this team. That's one of the tasks that the team needs to be able to do, this evaluation, answering these evaluation questions. The second category, which we don't understand as well, is ensuring that the AI they're training reliably does the thing they're trying to incentivize it to do. For example, a technique called adversarial training that you could use for this problem is: if I'm training some AI, and I'm concerned that in some situations it might behave badly, I can try and synthesize situations. I can try and construct hand-crafted situations, designed to look like situations where the AI might behave badly. And instead of running, say, just on real examples, I also run it on these synthetic examples designed to cause problems. And so in order to do that, I need to be able to do tasks like look at the AI and figure out what the cases are where it would perform poorly, and then also evaluate, in such a case, did it do something unacceptable? I think it's not crazy to think about this with an analogy to humans. Again, you have your team of 20 humans. You have one slightly smarter human, and you're concerned that under some conditions, they're gonna try and break this process. They're gonna try and go rogue, do something bad. 
And this team of 20 humans is responsible for interrogating this human and making sure there’s not a condition under which it’s going to behave poorly. Giving it test cases that hopefully it can’t tell, the human being trained can’t tell, from the real situations that it’s going to encounter.That’s an example of a technique that you could use for this process of reliability. There are other possible techniques you could use. And those are the two big tasks that this team of AIs needs to be able to solve. It was a team of humans in the analogy, but … This team needs to be able to evaluate behavior to produce good incentives, to train the AI to do what the team wants it to do, and it needs to be able to do some evaluation for reliability, which is not something we currently understand well."} {"prompt": "At its core, you’re going to try to get somewhat dumber AIs and humans together to come up with a training process by which they figure out whether this smarter AI that they’re trying to develop is behaving in the way that they want, by designing particular scenarios to test whether that’s the case. And even though they’re not quite as smart, I guess, in this model, because you’re throwing quite a lot of power behind that somewhat simpler task of just evaluating whether it’s doing the right thing, you hope that that way you’ll be able to gradually scale up and not lose alignment at any particular point in time.", "completion": "Yeah, that’s right. I guess it’s worth pointing out again that the … Generating scenarios, that’s one possible way of testing, trying to get the system to behave robustly, robustly to do the right thing. There are other possible approaches. You could also try and use an approach where you open up the brain of this AI you’re trying to train. Use something like interpretability techniques that people are currently working on, to understand how it’s thinking, and say, ah, now that I understand how it’s thinking, I see that here’s a place that it’s thinking that I wouldn’t want it to be thinking. And I can tell from that that it will fail in the scenario. Or I can just directly say, no, that was not a good way to be thinking about the problem, and penalize that. One of the major things this group is doing is just determining incentives for the AI that they’re training. This team of slightly dumber humans is just determining what … They’re evaluating the AI on realistic examples, on examples that appear in the real world and saying, how good was its behavior in this case? How good was its behavior in that case? And the AI is being trained to maximize those evaluations."} {"prompt": "By “incentives,” you mean, do we give it its reward? Do we give it whatever it’s been programmed to try to get?", "completion": "Yeah. I mean, formally, you would really be using gradient descents, where you’re like, yup, we take our AI, we take this evaluation that this team is providing, and then we modify the AI very slightly, so that it gets a slightly higher reward on that, a slightly higher evaluation, or it outputs actions that have higher evaluations on average. And in that setting, actually the AI that you’re starting with is exactly the same as the AIs who are on this team doing the oversight. But after you make this very small perturbation, that perturbation now hopefully gives you an AI that’s very slightly smarter than the other AIs on the team. The AI that’s actually thinking is exactly as smart as the ones on the team. 
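To make the shape of that loop concrete, here is a minimal, runnable toy in Python. Everything in it is a hypothetical stand-in, not code from any real system: the "task" is just to output a number close to a target that no single agent knows, but that a human-style overseer plus a team of assistant copies can evaluate together, and a gradient step is replaced by picking between two tiny perturbations.

```python
import random

# Illustrative toy only; every name here is a hypothetical stand-in.
N_ASSISTANTS = 20
COMPONENTS = [random.random() for _ in range(N_ASSISTANTS)]  # hidden structure of the task
TARGET = sum(COMPONENTS)          # no single agent is assumed to know this directly

class ToyModel:
    """Stand-in for the AI being trained; its 'policy' is a single number."""
    def __init__(self, guess=0.0):
        self.guess = guess

    def copy(self):
        return ToyModel(self.guess)

    def answer_subquestion(self, i):
        # Assistants handle easy sub-questions (here, reporting one component).
        return COMPONENTS[i]

def amplified_overseer(assistants, proposed_guess):
    """The human plus copies of the current model: each copy answers one easy
    sub-question, and the combining step (the human's role) turns those answers
    into an evaluation of the proposed action. Higher is better."""
    estimate = sum(a.answer_subquestion(i) for i, a in enumerate(assistants))
    return -abs(proposed_guess - estimate)

def training_step(model, step_size=0.05):
    assistants = [model.copy() for _ in range(N_ASSISTANTS)]
    # Consider two slightly perturbed versions of the current behaviour and
    # keep whichever the amplified overseer rates more highly.
    candidates = [model.guess + step_size, model.guess - step_size]
    model.guess = max(candidates, key=lambda g: amplified_overseer(assistants, g))
    return model

model = ToyModel()
for _ in range(500):
    model = training_step(model)
print(round(model.guess, 2), "vs target", round(TARGET, 2))
```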
It's only as you consider these possible perturbations that you hope the perturbations are, like, epsilon smarter. And that's how training would normally work, where you'd have some evaluation: take your AI, run it, perturb it to get slightly better performance, repeat."} {"prompt": "Someone emailed me about IDA wanting me to ask you about it and said, "The context here is that I and many others think that IDA is currently the most promising approach to solving the alignment problem, largely because it's the only real, actual proposal that anyone has made." Do you think that's right? And, more generally, what's been the reaction to this general approach?", "completion": "Yeah. I would say the current situation is, I am very interested in really asking what solutions would look like as you scale them up. What is our actual game plan? What is the actual end-game here? That's a question that relatively few people are interested in, and so very few people are working on. MIRI, the Machine Intelligence Research Institute, is very interested in that question, but they part ways with me by believing that that question is so obviously impossible that it's not worth thinking about it directly, and instead we should be trying to improve our understanding of the nature of rational agency. That's the reason, to me, why they are not in the business of trying to produce concrete proposals. It just seems doomed to them. It feels to them like we're just patching holes in a thing that's fundamentally not going to work. And most people in the broader ML community, I would say they take an attitude that's more like, we don't really know how the system is going to work until we build it. It's not that valuable to think about, in advance, what the actual scheme is going to look like. And that's the difference there. I think that's also true for many safety researchers who are more traditional AI or ML researchers. They would more often say, look, I have a general plan. I'm not going to think in great detail about what that plan is going to look like because I don't think that thinking is productive, but I'm gonna try and vaguely explain the intuitions, like maybe something like this could work. It sort of happens to be the case that basically no one is engaged in the project of actually saying, here is what aligned AI might look like. I'm trying to aspire to the goal of actually writing down a scheme that could work. There are a few other groups that are also doing that. I guess, at the OpenAI safety team, we also recently published this paper on safety via debate, which I think also has this form of being an actual candidate solution, or something that's aspiring to be a scalable solution to this problem. Geoffrey Irving was lead author on that. He's a colleague on the OpenAI safety team. I think that's coming from a very similar place. And maybe, in some sense, it is a very similar proposal. I think it's very likely that either both of these proposals work, or neither of them works. In that sense they're not really totally independent proposals. But they're getting at … They're really pushing on the same facts about the world that let you make AI. Both of them are leveraging AI to help you evaluate your AI. I think the other big category is work on inverse reinforcement learning, where people are attempting to invert human behavior and say, given what a human did, here's what the human wants. 
Given what the human wants, we can come up with better plans to get what the human wants, and maybe that approach can be scalable. I think the current state of affairs on that is there are some very fundamental problems with making it work, with scaling it up, related to: how do you define what it is that a human wants? How do you relate human behavior to human preferences, given that humans aren’t really the kind of creature that actually has … There’s no slot in the human brain where you put the preferences. I think unfortunately we haven’t made that much progress on that core of the problem, or what I would consider the core of the problem. I think that’s related to people in that area not thinking of that as being their current, primary goal. That is, they’re not really in the business of saying, here we’re gonna try and write down something that’s just gonna work, no matter how powerful AI gets. They’re more in the business of saying, let’s understand and clarify the nature of the problem, make some progress, try and get some intuition for what will allow us to make further progress, and how we could get ourselves in a position where, as AI improves, we’ll be able to adapt to that. I think it’s not a crazy perspective. But I think that’s how we come to be in this place where there are very, very few concrete proposals that are aspiring to be actual … a scheme you could write down and then run with AI, and that would actually yield aligned AI. I think the overall reaction is there are two kinds of criticisms people have, one of which is: this problem seems just completely hopeless. There are a few reasons people would think that this iterative amplification approach is completely hopeless. They can normally be divided roughly into thinking that organizing a team of 20 AIs to be aligned and smarter than the individual AIs already subsumes the entire alignment problem. In order to do that, you would need to understand how to solve alignment in a very deep way, such that if you understood that, there’d be no need to do any of this, to bother with any of the other machinery. The second common concern is that this robustness problem is just impossibly difficult. In the iterative amplification scheme, as we produce a new AI, we need to verify … Not only do we need to incentivize the AI to do well on the training distribution. We also need to sort of restrict it to not behave really badly off of the training distribution. And there are a bunch of plausible approaches to that that people are currently exploring in the machine learning community. But it’s … I think the current situation is, we don’t see a fundamental reason that’s impossible, but it currently looks really hard. And so many people are suspicious that that problem might be impossible. That’s one kind of negative response: maybe iterative amplification just cannot be made to work. The other kind of response is, it’s reasonably likely that AI safety is easy enough that we also don’t need any of this machinery. That is, I’ve described this procedure for trying to oversee AI systems that are significantly smarter than humans. Many of the problems on this perspective are only problems when you want things to be scalable to very, very smart AI systems. You might think, instead, look, we just want to build an AI that can take one “pivotal act,” which is an expression people sometimes use for an action an AI could take that would substantially improve our situation with respect to the alignment problem.
Say we want to build an AI which is able to safely take a pivotal act. That doesn’t require being radically smarter than a human or taking actions that are not understandable to a human, so we should really not be focusing on or thinking that much about techniques that work in this weird, extreme regime. I guess even people in the broader ML community would say, look, I don’t know … They don’t necessarily buy into this framework of, can you just take a pivotal act? But they would still say, look, you’re worrying about a problem which is quite distant; it’s pretty likely that for one reason or another that problem is going to be very easy by the time we get there, or that one of these other approaches we can identify is just going to turn out to work fine. I think both those reactions are quite common. I think there’s also a reasonably big crowd of people who are like, yeah, I’m really interested, coming from a similar perspective to me, where they really want a concrete proposal where they can actually see how it could work. I think those people tend to be, well, for those who aren’t incredibly pessimistic about this proposal, many of them are pretty optimistic about iterative amplification, or debate, or something along those lines."} {"prompt": "That’s a great answer. Yeah, I think it’s really creditable that you actually try to put out ideas for how we could deal with this, and I’ve seen, as you said, very few other people actually try to do that. And people can just read those ideas for themselves on your AI Alignment blog on Medium. You mentioned another approach that people have been talking about recently, which is debate as a way of aligning AI. You also mentioned inverse reinforcement learning. But we discussed that in the episode last year with Dario Amodei, so we’ll skip that one. But can you just describe the approach in the debate paper, which is somewhat similar, it sounds like, to IDA?", "completion": "Yeah. The idea is, we’re interested in training AI systems to make decisions that are in some respects too complicated for a human to understand. It’s worth pointing out that problems can appear probably long before AI is broadly human-level, because AI’s abilities are very uneven, so it can have an understanding of a domain that is way beyond human understanding of that domain, even while being subhuman in many respects. We want to train this AI to make decisions that are too complex for a human to understand. We’re wondering, how do you get a training signal for such an AI? One approach people often take is to pick some actual consequence in the world, like some simple consequence in the world that you could optimize. Like, whatever, you’re running a company: I don’t care how you’re making decisions about that company. All I care about is that they lead to the company having high profit. We’re interested in moving away from that … I think there are serious concerns with that, from a safety perspective. We want to move more towards the regime where, instead of evaluating, “yes, this decision had good consequences, but I don’t understand why,” we’re evaluating a proposed decision and saying, “yeah, we understand that that’s a good decision, so we’re going to give it a high reward because we understand why it’s good.” That approach has … I mean, if an AI comes to you and says, “I would like to design the particle accelerator this way because,” and then makes an inscrutable argument about physics to you, you’re faced with this tough choice.
You can either sign off on that decision and see if it has good consequences, or you can be like, “no, don’t do that ’cause I don’t understand it.” But then you’re going to be permanently foreclosing some large space of possible things your AI could do. Instead, the proposal is, we’re going to have two AIs. One AI’s gonna make a proposal. We can’t evaluate it directly … That proposal counts on a bunch of complicated facts that we don’t necessarily understand. It’s gonna make some complicated argument about the economy in order to justify that proposal. And we couldn’t actually evaluate that argument. But we can introduce this adversarial agent who can explain to us why the proposal that was made is bad. And then, if this critique has a flaw, the original agent can say, “no, that critique is not a valid critique because,” and point out the flaw. And then the critiquer can say, “no, actually it was valid.” They can go back and forth in this way. Then you can implicitly explore an exponentially large space of considerations. Because by giving the critiquer the option to pick any line of argument that they want in order to attack the proposal, then if the critiquer is not able to win, it suggests to you that every possible line of argument would have been unsuccessful. Every possible line of argument would have still left you thinking the proposal was a good one. It’s not clear if you can actually construct such a setup. Now we have some complicated question: our AI is proposing an action to you. We would like to set up the debate such that the best action will actually win the debate. If two AIs propose actions, and one of them is proposing an action which is actually better, then it will be able to win a debate in which it establishes that its action is better. I think there are some plausibility arguments, like the one I just gave, that you’re exploring an exponentially large space of considerations. But this might be possible even in cases where a human couldn’t have any idea about the task itself, or about directly answering the question. It’s a very open question, exactly how powerful is debate? That is, if we set up a debate in the best possible way, so we have some human judge of this debate who’s evaluating the claims and counter-claims, and we give them optimal training and optimal advice, and then we have two very powerful agents debate in this way, we’d like it to be the case that the optimal strategy in this debate is being honest and actually telling the truth and then actually providing valid arguments for that and responding to counterarguments in a valid way.
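To make the shape of that setup concrete, here is a minimal sketch in Python; the agent and judge signatures, the fixed round count, and the "A"/"B" labels are all hypothetical stand-ins rather than anything specified in the debate paper.

```python
# Minimal sketch of the two-AI debate game described above (hypothetical names).
# Two agents take turns adding statements to a transcript; a judge who reads the
# transcript declares a winner.

from typing import Callable, List

Agent = Callable[[str, List[str]], str]   # (question, transcript) -> next statement
Judge = Callable[[str, List[str]], str]   # (question, transcript) -> "A" or "B"

def run_debate(question: str, agent_a: Agent, agent_b: Agent,
               judge: Judge, num_rounds: int = 4) -> str:
    """Alternate statements between the two debaters, then ask the judge who won."""
    transcript: List[str] = []
    for _ in range(num_rounds):
        transcript.append("A: " + agent_a(question, transcript))
        transcript.append("B: " + agent_b(question, transcript))
    return judge(question, transcript)
```

In training, each debater would then be rewarded according to the judge's verdict, and the hope described above is that, for a strong enough judge, the strategy that wins this game is honest argument.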
And we don’t know if that’s the case, but figuring out if that’s the case and then understanding in what cases we’re able to run such debates and it converges to truth, understanding how to set them up so they converge to truth, etc., does give a plausible way of training powerful AI systems."} {"prompt": "So how analogous is this approach to a case where say a person like me is trying to judge a difficult scientific issue, and I wouldn’t be capable of doing the original research and figuring out the truth for myself, but if there was scientists debating back and forth and one of them maybe was trying to be misleading in some way and another one was being truthful, the hope is that I would be able to figure out which one was telling the truth because I can at least evaluate the debate even if I couldn’t produce the arguments, myself?", "completion": "Yeah, so I think the situation’s pretty analogous to two human experts with lots of understanding you lack; you’re trying to understand the truth. You hope that if one of those experts is trying to make a claim that is true, then by zooming in on one consideration after another, you could find out if it’s true. You could eventually come to be very skeptical all the counterarguments or they could undermine all the counterarguments that were offered, and so I think that’s like … it’s definitely not an obvious claim. It’s not obvious in the context of human discussions. I think as a society, empirically, there aren’t great examples of covering really big gaps in expertise. Like, it’s often the case that two people with expertise in their area can have a debate in a way that convinces someone with slightly less expertise, but when there’s really large gaps, I don’t think we have a very good record of doing that kind of thing successfully.So, I’d say there’s more hope that this is possible, that a human could just evaluate some proposal produced by a sophisticated AI system, but it’s still very much an open question whether this kind of thing could actually work and one way you could try and assess that would be say, “We’re gonna get fairly serious about … have some serious experiments of trying to take people with considerable expertise in an area, have them have a debate arbitrated by someone with less expertise."} {"prompt": "What do you think is the biggest or most concerning criticism of AI safety via debate?", "completion": "Personally, I think the worst problem is just, is the basic question, do debates tend to favor accurate answers, or do they tend to favor answers that are easy to defend for reasons other than their accuracy? There’s a bunch of reasons the debate might favor an answer other than it being accurate. I think one that really leaps to people’s mind is, well, the judge is just a human. Humans have all sorts of biases and inconsistencies. That’s one reason that debate could favor answers other than the accurate one. I’m more personally concerned about maybe an even more basic question, which is, setting aside all human biases and all ways in which humans fail to reason well, I think it’s just an open question: does the structure of debate tend to promote truth? Does it tend to be the case that there’s some way to argue for the accurate position, even if the content of the debate, the thing you’re debating, is really, really complex compared to what the human can understand?"} {"prompt": "It seems like debate among humans is way better than random, anyway.", "completion": "I agree. 
Humans are … And clearly we’re able, at least in some cases, to get much better answers than we’d get on our own. If I get to observe two experts debate a subject, even if one of them is actively trying to mislead me, I can arrive at a better conclusion than I could have arrived at if I just wasn’t able to listen to their expertise, or was only given a single expert whose incentive was to look good. I think that the example of debates amongst humans makes it very unclear whether this procedure can be scaled arbitrarily far. An example you might think of is: consider a human who’s a smart person, who knows a lot about reasoning, who is well practiced at judging debates. They don’t know any calculus. They’re now judging a debate between two quantum physicists about how to determine the results of some experiment on a recent particle accelerator. Just imagining that process, I can see how it could work, I can imagine it working, but it’s an incredibly intimidating prospect. This person is not gonna understand anything about … Over the course of the debate, there’s no way that they can form in their head a reasonable model of calculus, or of quantum mechanics, or of the standard model. And yet you hope that somehow the way they’re arbitrating this debate can implicitly answer extremely complex questions that depend on all those areas. I think this is the kind of test that you can do empirically. We can ask the empirical question: for a very smart person who’s been trained to judge such debates, and then you have two people with a ton of expertise in an area they’ve never thought about come in, one of them trying to convince them of the truth and one trying to mislead them, is it the case empirically that humans can arbitrate such debates, and that the best way to win such a debate is actually to provide true facts about the domain to the human? If that’s the case, it’s a very interesting fact, not just for the purpose of AI but just in general. I think it, prima facie, is just a really important question about the world. Are there norms of debate that allow you to consistently arrive at the truth in domains where the arbitrator doesn’t understand what’s true? That’s a question that’s relevant to a ton of domains. This version of the question is distinctive in some respects. I think mostly it’s distinctive because we are free to set things up in the way that’s maximally convenient. It’s kind of asking, under the best possible conditions, can debate be conducive to truth? Whereas most debates in our current society are under pretty highly suboptimal conditions, like very limited time, bad incentives on behalf of the judge, judges sampled from some population that doesn’t have a lot of time to think about how to judge such debates well, or hasn’t thought a lot about how to structure this to lead to truth. I think most debates in the real world are under pretty pessimistic conditions, but just understanding when debate works, or when debate is conducive to truth, I think is a really … I would consider that a really fundamental and interesting question, completely independent of AI.
I think it’s now also a particularly important question because it really is closely related to one of the most plausible strategies for training very powerful AIs to help us actually arrive at good advice or good conclusions."} {"prompt": "Are there other important pros and cons of this approach that are worth mentioning?", "completion": "So, I think there’s definitely a lot that could be said about it. There are a bunch of other issues that come up when you start actually trying to do the machine learning, when you try and train agents to play this kind of game; there’s lots of ways that that can be hard as a machine learning problem. You can have lots of concerns in particular with the dynamics of this game. So, some people maybe wouldn’t be happy that you’re training your AIs to be really persuasive to people. You might be concerned that makes some kinds of familiar failure modes crop up in more subtle ways or be more problematic. But, I really think the main thing is just: is it the case that a sufficiently sophisticated judge will be able … every judge defines a different game, like convincing me is a different game from convincing you. I think it’s clear that for weak enough judges, this game isn’t particularly truth conducive. There’s no reason that the honest player would have an advantage. The hope is that there is some level of sufficiently strong judges, above which it’s the case that you converge, over longer and longer debates, to more accurate claims. Yeah, it’s unclear. So, the first question is: is there a threshold? And the second question: are humans actually above that threshold? If that’s the case, then when humans judge such debates, honest strategies will actually win."} {"prompt": "What kind of people do you need to pursue this research? Are there any differences compared with other lines?", "completion": "So, again, I think there’s like a very similar … there’s a bunch of different questions that come up both for amplification and debate. I think different questions require different kinds of skill and different backgrounds. I think that both for amplification and debate, there is this more conceptual question. Or, I don’t know if conceptual is the right word. It’s a fact both about like the structure of argument and about the actual way humans make decisions, which is like, “Can humans arbitrate these debates in domains where they lack expertise? Or, in the amplification case, can you have teams addressing some issue where no individual can understand the big picture?” And that, I mean, there’s a bunch of different angles you could take on that question. So, you could take a more philosophical angle and say, “What is going on there?
Why should we expect this to be true, or what are the cases that might be really hard?” You could also just run experiments involving people, which seems relatively promising, but involves, obviously, a different set of skills, or you could try and study it in the context of machine learning and go straight for training … you might say, “Well, we could test these things with humans if we had very large numbers of humans.” Maybe, actually, the easiest way to test it is to be in the regime where we can train machines to effectively do things that are much, much more expensive than what we could afford to do with humans.So, you could imagine approaching it from a more philosophical perspective, a more, I don’t know, cognitive science or just organizing, maybe not even in an academic perspective, just putting together a bunch of humans and understanding how those groups behave, or a more machine learning perspective."} {"prompt": "What’s been the reception from other groups to this debate approach?", "completion": "So, I think there are many different groups and different answers for different groups. I would say that for many people, the nature of the problem or the alignment problem is very unclear when they first hear it stated in an abstract way, and so I think for a lot of people, it’s been very helpful to get a clear sense of what the problem is that you’re trying to solve. I think when you throw out this proposal, people both understand why debates are better than just giving an answer and having a human evaluate it, and they also can sort of see very clearly why there’s difficulty, like it’s not obvious that the outcome of the debate is in fact producing the right answer.So, I think from that perspective, it’s been extremely helpful. I think a lot of people have been able to get much more purchase understanding what the difficulties are in what we’re trying to do. I think for people who are more in like the ML side … again, it’s still been very helpful for having them understand what we’re trying to do, but I think the ML community is really very focused on a certain kind of implementation and actually building the thing, and so I think that community is mostly just sort of waiting til, “That’s a very interesting research direction,” and then their response is to wait until things either happen or don’t happen, til we’ve actually built systems that embody those principals to do something which you wouldn’t have been able to do without that idea."} {"prompt": "So, if we can use this approach to go from having like 60% accuracy to 70% or 80%, how useful is that? Do we need to be able to judge these things correctly almost all of the time, or is it more just like: the more often humans can make the right call, the better.", "completion": "Yeah, so, certainly, if you just had a judge who was like correct, but then 40% of the time, they err randomly, that would be totally fine. I guess that’s sort of gonna average out and it’s not a problem at all. What you really care about is just: in what cases are there, to what extent are there systematic biases in these judgements? So, to the extent that we consistently just make the wrong answer when the answer depends on certain considerations or in certain domains, and so, from that perspective, I guess the question is, “What class or problems can you successfully resolve with this technique, and if you push that frontier of problems a little bit further, you can solve a few more problems now than you could before. 
Are you happy?”I’d say there’s kind of two attitudes you could have on this. So, one: I guess the thing I really would like is a solution that just works in the sense that we have some principal reason to think it works. It works empirically. As we scale up our machine learning systems, it works better and better, and we don’t expect that to break down. That would be really great and sort of has, from a more theoretical perspective, that’s kind of what we’d like. There’s a second perspective you could have, which is just, “There’s this set of important problems that we want to apply machine learning systems to, so as we deploy ML systems, we think the world might change faster or become more complex in certain respects,” and what we really care about is whether we can apply machine learning to help us make sense of that kind of world or steer that world in a good direction, and so from that perspective, it’s more like there’s this list of tasks that we’re interested in and sort of the more tasks we can apply ML to, the better position we will be to cope with possible disruption caused by ML.And so, from that perspective, I think you’re just sort of happy every time you expand the frontier of tasks that you’re able to solve effectively. I think I also take that pretty seriously, so if it was the case that we could just push the set of tasks that we’re able to solve a little bit, I think that would improve our chances of coping with things well a little bit, but my main goal is, or the main focus, I think, as we are further away, if we’re having to think about things more conceptually or more theoretically, then I think it’s better to focus on having a really solid solution that we think will work all the time. As we get closer, then it becomes more interesting to say, “Great, now we see these particular problems that we wanna solve. Let’s just see if we can push our techniques a little bit so that ML systems can help us solve those problems.”"} {"prompt": "Do you think it’s possible that there’s an advantage to whoever’s trying to be deceptive in these cases, that in fact it’s easier for the person who’s trying to mislead or judge because they can choose from a wider range of possible claims that they could make whereas the person or the agent that’s trying to communicate the truth, they can only make one claim, which is the true one?", "completion": "Yeah, I guess a few points: maybe a first preliminary point is that, in general, if you had two agents, there wouldn’t be one assigned to the truth and one assigned to lie. Instead, they would both just be arguing whatever they thought would be most likely to be judged as honest and helpful, so in a world where it worked like that, there would just … neither participant in the debate would be trying to say anything true. Both of them would be arguing for some garbage if we were in this unfortunate situation.So then, in terms of the actual question, yeah, you could sort of imagine there is this giant space of things you could argue for. 
One of them is like … there’s some tiny space of things that we would actually regard, on reflection, as the most truthful, and then all the other stuff. Yeah, it’s a very, very tiny subset of the possible claims, and there’s a ton of other things that differ between different claims besides how useful and truthful they actually are. And so, I think, a priori, you would definitely be … it’s like a very, very surprising claim, or very, very special claim, to say the very best strategy from amongst all these strategies is the one that’s most truthful and helpful. So I definitely think your first guess, just if you didn’t know anything about the domain, would be that there are going to be some other properties. Maybe how nice it sounds is very important, like you wanna pick the thing that sounds nicest, or the thing that has the slickest soundbite in its favor, or something like that. I think I am reasonably optimistic that if, say, a human judge is careful, they can sort of judge well enough that they have some … I’d say, if you’re a weak judge, this process can’t really get off the ground. You’re not able to at all correlate your judgements with truth. As you get to be a stronger judge, you hope that not only can you start to answer some questions; you can sort of bootstrap up to answering more and more complex questions. So, that is, you could say, “Well, if I just were to guess something off the top of my head, that has one level of correlation with truth.” Like, in easy enough cases, that’s going to be truthful. Then, if I have a short debate that sort of bottoms out with me guessing something off the top of my head, that can be a little bit more conducive to the truth. And now, if I have a long debate, where after that long debate I can have a short debate to decide which side I think wins, then I think that’s more likely to be conducive to truth. So, you would hope that you get this sort of bootstrapping behavior where, as you think longer and longer, the class of cases in which truthfulness becomes the optimal strategy grows. But yeah, I think it’s not obvious at all."} {"prompt": "What’s the best source for people who want to learn more about this approach? There’s a paper up on arXiv, and I think also a blog post that came out after that that’s perhaps more extensive?", "completion": "I think the paper’s probably the best thing to look at. So, there’s a paper on arXiv called AI Safety Via Debate. It covers a lot of considerations and raises a lot of considerations, discusses possible problems, discusses how it compares to amplification, things like that. It presents some very simple toy experiments to show a little bit about how this might work in the context of machine learning. It doesn’t present a convincing example of a system which does something interesting using debate, and so that’s what we’re currently working on, and so a reader who’s looking for that should maybe come back in six months. But, I think if you want to understand why we’re interested in the idea or what is basically going on, then I think the paper’s a good thing to look at."} {"prompt": "What would prevent us from implementing either of these strategies today? What advances do we need to actually be able to put them into practice?", "completion": "I think, depending on your perspective either unfortunately or fortunately, there’s really a ton of stuff that needs to be done.
One category is just building up that basic engineering competence to make these things work at scale. Running this process is kind of like training an AI to play a game. Let’s consider the debate case, which I think is fairly similar in technical requirements, but may be a bit easier to talk about. We understand a lot about how to train AIs to play games well because that’s a thing we’ve been trying to do a lot. This, as an example of a game, has many differences from the games people normally train AIs to play. For example, it is arbitrated by a human, and queries to a human judge are incredibly expensive. That presents you with a ton of problems about, one, organizing the collection of this data, and using approximations. There’s this whole family of approximations you’re going to have to use in order to be able to actually train these AIs to play this game well. You can’t just have, every time they play the game, a human actually make the evaluation. You need to be training models to approximate humans. You need to be using less-trusted evaluations. You need to be learning cleverly from passive data rather than actually allowing the AIs to query the judge. That’s one … Technically, running this project at scale is hard for a bunch of the reasons that AI is hard, and then also hard for some additional reasons distinctive to the role of humans in these kinds of proposals. It’s also hard, I guess, as a game, because it has some features that games don’t normally have. So we’re used to thinking of games with … there are other technical differences beyond the involvement of humans that make these kinds of hard engineering problems. And some of those are things that I’m currently working on, just trying to understand better. And, again, trying to build up the actual engineering expertise to be ready to make these things work at very large scale. So that’s one class of problems. A second class of problems is just figuring out … I think there’s maybe two things you could want. One is you want to be able to actually apply these schemes. You want to be able to actually run such debates and use them to train a powerful AI, but then you also want to understand much more than we currently understand about whether that’s actually going to work well. So in some sense, even if there was nothing stopping us from running this kind of training procedure right now, we’re going to have to do a lot of work to understand whether we’re comfortable with that. Do we think that’s good? Or do we think that we should do some other approach, or maybe try harder to coordinate to avoid deploying AI? That’s a huge cluster of questions, some of which are empirical questions about how humans think about things. Like what happens in actual debates involving humans, or what happens if you actually try and take 20 humans and have them coordinate in the amplification setting. It also depends on hard philosophical questions. Like I mentioned earlier, the question, “What should a superintelligent AI do?” If you had a formal condition for what it should do, then your problem would be a lot easier. Our current position is we don’t know. In addition to solving that problem, we’re going to be defining that problem. Like, “should” is a tricky word. So that’s the second category of difficulties. There’s a third, big category of difficulties corresponding to … and the third category is maybe something we could just wait on. Current AI is not sophisticated enough to, say, run interesting debates.
That is, if you imagine the kind of debate between humans that’s like interestingly promoting truth, that involves a very complicated learning problem the debaters have to solve. And to think right now, it feels like that problem is just at the limits of our abilities. Like you could imagine in some simple settings training that kind of AI. And so, one option would just be to wait until the AI improves and say we’re going to try and study these techniques in simpler cases and then apply them with the real messiness of human cognition only once the AI’s better.Another option would be to try and push safety out as far as one could go. So it’s actually starting to engage with the messiness of human cognition. And to be clear, the second step I suggested is philosophical difficulties and asking whether this is actually a good scheme. That’s totally going to have to, even today, involve engaging a ton with humans. Like that involves actually running debates, actually doing this kind of decomposition process that underlies amplification.So maybe those are the three main categories of difficulty that I see. I think all of them seem very important. I think my current take is probably that the most important ones are figuring out if this is a good idea rather than being actual obstructions to running the scheme. I think it’s quite realistic to relatively soon be at a place where you could use this procedure to train a powerful AI. And the hard part is just getting to the point where we actually believe that’s a good idea. Or we’ve actually figured out whether that’s a good idea. And then, I mean that’s not just figuring it out, it’s also modifying the procedures so that they actually are a good idea."} {"prompt": "Yeah that makes a lot more sense now.Okay, let’s push onto a different line of research you’ve been doing into prosaic AI alignment. You’ve got a series of posts about this on ai-alignment.com. Yeah, what’s kind of the argument you’re making? What is prosaic AI?", "completion": "So, I would describe this as a motivating goal for research, or a statement of what we ought to be trying to do as researchers working on alignment, and roughly what I mean by prosaic AI is AI which doesn’t involve any unknown unknowns, or AI which doesn’t involve any fundamental surprises about the nature of intelligence. So, we could look at existing ML systems and say whether or not I think this is likely, we could ask what would happen if you took these ideas and scaled these ideas up to produce something like sophisticated behavior or human-level intelligence, and then, again, whether or not that’s likely, we can sort of understand what those systems would look like much better than we can understand what other kinds of AI systems would look like just because they would be very analogous to the kinds of systems we could build today.And so, in particular, what that involves, I guess, if the thing we’re scaling up is something like existing techniques in deep learning, that involves defining an objective, defining a really broad class of models, a really giant … and that’s a complicated model involving attention and internal cognitive workspaces, and then just optimizing over that class to find something that scores well according to the objective, and so we’d imagine … yeah, so that’s the class of technique. 
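As a concrete illustration of that recipe, here is a toy sketch in Python; the model class, objective, and data below are hypothetical stand-ins (a linear map trained by gradient descent on squared error), not anything Paul specifies.

```python
# Toy version of "define a model class, define an objective, optimize over the
# class to find something that scores well." All ingredients are stand-ins.

import numpy as np

rng = np.random.default_rng(0)

def model(params, xs):
    # "Model class": here just a linear map; in practice a large neural net.
    return xs @ params

def objective(params, xs, ys):
    # "Objective": here squared error; in practice whatever training signal we choose.
    return float(np.mean((model(params, xs) - ys) ** 2))

# Made-up data and a random initialization.
xs = rng.normal(size=(100, 3))
ys = xs @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
params = rng.normal(size=3)

# Gradient descent: repeatedly nudge the parameters to score better.
lr = 0.05
for _ in range(500):
    grad = 2 * xs.T @ (model(params, xs) - ys) / len(xs)
    params -= lr * grad

print(objective(params, xs, ys))  # small for this toy problem
```

The point is only the shape of the recipe: a fixed search procedure plus a chosen objective, which is why, on this view, the alignment question largely becomes what objective to supply.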
That’s the basic technique, and you could say, “What would happen if it turned out that that technique could be scaled up to produce powerful AI?” That’s what I mean by prosaic AI, and then the task would be to say, “Supposing you live in that world, supposing we’re able to do that kind of scale-up, can we design techniques which allow us to use that AI for good or allow us to use that AI to do what we actually want, given that we’re assuming that AI can be used to have some really big transformative impact on the world.”Yeah, so there’s a few reasons you might think this is a reasonable goal for research. So, maybe one is that it’s a very … it’s like a concrete model of what AI might look like, and so it’s relatively easy to actually work on instead of sort of being in the dark and having to speculate about what kinds of changes might occur in the field. Second reason is that even if many more techniques are involved in AI, it seems quite likely that doing gradient descent or rich model classes is going to be one of several techniques, and so if you don’t understand how to use that technique safely, it’s pretty likely you’re going to have a hard time.Maybe a third reason is that I think there is actually some prospect that existing techniques will go further than people guess, and that’s a case that’s particularly important from the perspective of alignment, because, in that case, people, sort of by hypothesis, be caught a little bit by surprise. There’s not that much time to do intervening, or to do more work between now and then, so I think in general, I would advocate for a policy of, “Look at the techniques you understand currently and try and understand how to use those techniques to safely use those techniques, and then once you’ve really solved that problem, once you’re like, “Now we understand how to make, how to do gradient descent in a way that produces safe AI,” then you can go on and look towards future techniques that might appear and ideally to understand for each of the techniques that might play a role in building your AI, you’d have some analogous safe version of that technique, which doesn’t introduce problems with alignment but is roughly equally useful."} {"prompt": "So, I guess the people who wouldn’t be keen on this approach would be those who are confident that current methods are not going to lead to very high levels of general intelligence, and so they expect the techniques that you’re developing now just won’t be usable ’cause they’re gonna be so different.", "completion": "Yeah, I guess I’d say there’s two categories of people that might be super skeptical of this as a goal. One will be, as you said, people who just don’t believe that existing techniques are going to go that far or don’t believe that they’re going to play an important role in powerful AI systems, and then a second one would be those who think that’s plausible, but that the project is just doomed. That is that there is going to be no way to produce an analog of existing techniques that would be aligned, even if they could in fact play a role in sophisticated AI systems. 
I think both of those are reasonably common perspectives."} {"prompt": "Well, I think in a minute we’ll talk about MIRI, and I guess their view is perhaps a bit of a combination of the two of them.", "completion": "Yeah, they’re some of the strongest proponents of the second view, that we’re super doomed in a world where sophisticated AI looks anything like existing systems."} {"prompt": "Can you lay out the reasons both for and against thinking that current techniques in machine learning can lead to general intelligence?", "completion": "Yeah, so I think one argument in favor, or one simple point in favor, is that we do believe that if you took existing techniques and ran them with enough computing resources (there’s some anthropic weirdness and so on), that would produce general intelligence, based on observing humans, who were effectively produced by the same techniques. So, we do think if you had enough compute, that would work. How much that probably takes: sort of, if you were to run a really naïve analogy with the process of evolution, you might think that if you scaled up existing ML experiments by like 20 orders of magnitude or so, then you would certainly get general intelligence. So that’s one: there’s this basic point that probably these techniques would work at large enough scale, so then it just becomes a question about what is that scale? How much compute do you need before you can do something like this to produce human-level intelligence? And so then the arguments in favor become quantitative arguments about why to think various levels are necessary. So, that could be an argument that talks about the efficiency of our techniques compared to the efficiency of evolution, examines ways in which evolution probably uses more compute than we’d need, includes arguments about things like computer hardware, saying how much of those 20 orders of magnitude will we just be able to close by spending more money and building faster computers, which is … 20 orders of magnitude sounds like a lot, but actually, we’ve covered more than 20 orders of magnitude so far, and we will cover a significant fraction of those over the current decade. Or you can also try and run arguments on analogies: look at how effectively, or with how much compute, existing systems train, and try and understand that. So, you could just try and say, based on our experience so far, how much compute do you think will be needed? That’s probably the most important class of arguments in favor. There are other, more qualitative arguments, like: there are lots of tasks that we’re able to do, so you’d probably want to look at what tasks we have succeeded at or failed at and try and fit those into that quantitative picture to make sense of it. But, I think it’s not insane to say that existing systems seem like they’ve plausibly reached the level of sophistication of insects: we are able to take this very brute force approach of doing search over neural nets and get behavior that’s … and this is totally unclear, but I think it’s plausible that existing behavior is as sophisticated as insect behavior.
If you thought that, then I think it would constitute an argument in favor.Yeah, so I guess arguments against, probably the most salient argument against is just, “If we look at the range of tasks humans are able to accomplish, we have some intuitive sense of how quickly machines are able to do more and more of those tasks,” and I think many people would look at that rate of progress and say, “Look, if you were to extrapolate that rate, it’s just gonna take a very, very long time before we’re able to do that many tasks.” I think a lot of this is just people extrapolate things in very different ways. So, some people would look at being able to see the task an insect can do and say, “Wow, insects have reasonably big brains on a scale from nothing to human. We’ve come a substantial fraction of the way. We’re perhaps plausibly going to get there just by scaling this up.”Other people would look at what insects do and say, “Look, insects exhibit almost none of the interesting properties of reasoning. We’ve captured some very tiny fraction of that. Presumably, it’s gonna be a really long time before we’re able to capture even like a small fraction of interesting human cognition.”"} {"prompt": "What are the aspects of cognition that seem most challenging, or, I guess, are most likely to require major research insights rather than just increasing the compute?", "completion": "Again, with enough compute, you’d sort of expect, or I would be willing to bet that you would get everything in human cognition, and the question is, in some sense, which aspects of cognition are most expensive to produce in this way, or most likely to be prohibitively expensive such that you can’t just find them by brute force search. You have to actually understand them. So, natural things or properties of human cognition operate over very long timescales. Maybe evolution got to take at developing different notions of curiosity until it found a notion of curiosity that is effective, or a notion of play that was effective for getting humans to do useful learning.It’s not clear that you can evaluate. If you have some proposed set of motivations for a human that you’re rolling out, it’s not clear you can evaluate then by actually having a bunch of human lifetimes occur, and so if there’s a thing you’re trying to optimize where every time you have a proposal, you have to check it in order to check it, you have to run a whole bunch of human lifetimes, that’s going to take a whole lot of checks. And so, if there’s like cognitively complicated things that only … right, so maybe curiosity’s simple, but if you have a thing like curiosity that’s actually very complicated or involves lots of moving parts, then it might be very, very hard to find something like that by this brute force search.Things that operate over very short timescales are much, much, more likely to … then you can try a whole bunch of things. You can get feedback about what works, but things that operate over long timescales might be very hard."} {"prompt": "So, it sounds like you’re saying at some level of compute, you’re pretty confident that current methods would produce human-level intelligence and maybe much more. I think a lot of listeners would find that claim somewhat surprising, or at least being confident that that’s true. 
Yeah, what’s the reason that you think that?", "completion": "Yeah, so there’s a bunch of … there are a bunch of things to be said on this topic, so maybe a first thing to say is human intelligence was produced by this process of: try some random genomes, take those genomes which produce the organisms with the highest fitness, and then randomly vary those a little bit and see where you get. In order for that process to produce intelligence, you definitely need a bunch of things. At a minimum, you need to try a huge number of possibilities. Again, now we’re just discussing the claim that with enough compute would work. So, at a minimum, you need to try a whole bunch of possibilities, but you also need an environment in which reproductive fitness is a sufficiently interesting objective.So, one reason that you might be skeptical of this claim is that you might think that the environment that humans evolved in or the lower life evolved in, like, is actually quite complex, and we wouldn’t have access. Even if we had arbitrarily large amounts of compute, it wouldn’t actually be able to create an environment rich enough to produce intelligence in the same way. So, that’s something I’m skeptical of largely because I think … humans operate in this physical environment. Almost all the actual complexity comes from other organisms, so that’s sort of something you get for free if you’re spending all this compute running evolution cause you get to have the agent you’re actually producing interact with itself.I guess, other than that, you have this physical environment, which is very rich. Quantum field theory is very computationally complicated if you want to actually simulate the behavior of materials, but, it’s not an environment that’s optimized in ways that really pull out … human intelligence is not sensitive to the details of the way that materials break. If you just substitute in, if you take like, “Well, materials break when you apply stress,” and you just throw in some random complicated dynamics concerning how materials break, that’s about as good, it seems, as the dynamics from actual chemistry until you get to a point where humans are starting to build technology that depends on those properties. And, by that point, the game is already over. The point when humans are building technologies that really exploit the fact that we live in a universe with this rich and consistent physics, at that point, you already have human-level intelligence. Effectively, there’s not much more evolution occurring.So yeah, maybe on the environment side, I think most of the interesting complexity comes from organisms in the environment, and there’s not much evidence that considerable computational complexity of the world is actually an important part of what gives a human intelligence. A second reason people might be skeptical is they might … this estimate, this 20 orders of magnitude thing would come from thinking about the neurons in all the brains in all the organisms that have lived. You might think that maybe the interesting compute is being done early in the process of development or something about the way that genotypes translate into phenotypes. If you think that, you might think that the neuron counts are a great underestimate for the amount of interesting compute.Or, similarly, you might think other things in the organisms are more interesting than either development or neurons. I think that, like, my main position here is it really does look like we understand the way in which neurons do computing. 
A lot of the action is sending action potentials over long distances. The brain spends a huge amount of energy on that. It looks like that’s the way that organisms do interesting computing. It looks like they don’t have some other mechanism that does a bunch of interesting computing ’cause otherwise they wouldn’t be spending these huge amounts of energy implementing the mechanism we understand. It does look like brains work the way we think they work."} {"prompt": "So, I guess some people could think that there’s a lot of computation going on within individual neurons, but you’re skeptical of that.", "completion": "Yeah, so I think my view would be that mostly the hard thing about … say, if you wanted to simulate a brain, you can imagine there being two kinds of difficulties. One is simulating the local dynamics of neurons, and a second is moving information long distances, say, as you fire action potentials, and I think, most likely, both in the brain and in computers, the movement of information is actually the main difficulty. It’s like, the dynamics within the neuron just don’t … they might be very complicated. It might involve a lot of arithmetic operations to perform that simulation, but I think it’s not hard to compare to just shuffling that around, and shuffling that around, we have a much clearer sense of exactly how much happens because we know that there’s these action potentials. Action potentials communicate information basically only in the timing. I mean, there’s a little bit more than that. But we can basically … we know sort of how much information is actually getting moved."} {"prompt": "It looks like ones and zeros?", "completion": "Yeah, it looks like ones and zeros and most of the extra bits are in timing, and we sort of know roughly what level of precision there is, and so there’s not that many bits per action potential."} {"prompt": "So, I don’t have a lot of understanding of the specifics of how machine learning works, but I would think that one objection people might have is to say that even if you had lots of compute and you tried to make the parameters of this, the machine learning more accurate, just the structure of it might not match what the brain is doing, so it might just cap out at some level of capability because there’s no way for the current methods of … the current way that the data’s being transformed to actually be able to produce general intelligence. Do you think there’s any argument for that or is it just the case that the methods we have now at least at some level of abstraction are analogous to what the human brain is doing, and therefore, with a sufficient amount of compute, maybe a very, very high amount, but they should be able to copy everything that the human brain is doing?", "completion": "Yeah, so I would say that most of the time, machine learning would be fixed on architecture and then optimize over computations that fit within that architecture. Obviously, when evolution optimizes for humans, it does this very broad search over possible architectures like looking over genomes that encode, “Here’s how you put together a brain.” We can also do a search over architectures, and so the natural question becomes, “How effective are we at searching over architectures compared to evolution?” I feel like this is mostly in the regime of just a computational question. 
That is, we sort of know … I mean, the very highest level that evolution uses isn’t that complicated, sort of at the meta level, and so you could, in the worst case, just do a search at that same level of abstraction.I guess one point that we haven’t discussed at all but is, I guess, relevant for some, some people would consider super relevant is anthropic considerations concerning the evolution of humans. So, you might think that evolution only extremely rarely produces intelligent life, but that we happen to live on a planet where that process worked."} {"prompt": "Yeah, what do you make of that?", "completion": "So, I think it’s kind of hard to make it fit with the evolutionary evidence. This is something that, I think Carl Shulman and Nick Bostrom have a paper about this, and some other people have written about it periodically: I think the rough picture is that intelligence evolves like … if this is the case, if there’s some hard step in evolution, it has to be extremely early in evolutionary history, so in particular, it has to happen considerably before vertebrates, and probably has to have happened by simple worms."} {"prompt": "And why is that? ‘Cause those steps took longer than the later steps did?", "completion": "Well, so, one reason … I think the easiest reason to put it before vertebrates is just to say that cephalopods seem pretty smart and the last common ancestor between an octopus and a human is some simple worm. I think that’s probably the strongest evidence. That’s from this paper by Nick and Carl."} {"prompt": "Okay, because then we have another line that also produced substantial intelligence.", "completion": "Independently."} {"prompt": "Independently, and that would be incredibly suspicious if it had happened twice on the same planet, and there, we don’t have the anthropic argument, ’cause you could live on a planet where it only happened once.", "completion": "That’s right. You could think maybe there’s a hard step between octopi and humans, but then we’re getting into the regime where like sort of any place you look-"} {"prompt": "What is this hard step?", "completion": "Many things happen twice. Like, birds and mammals independently seem to become very, very intelligent. You could think that maybe in early vertebrates, there was some lucky architectural choice made in the design of vertebrate brains that causes on the entire vertebrate line intelligence will then sort of systematically increase quickly, but what was important was this lucky step, but at some point: you can try and run some argument before you might get stuck before humans. It seems pretty hard to do. It doesn’t seem very convincing and it certainly doesn’t seem like it would give you an argument for why you wouldn’t reach at least like octopus levels of intelligence. So, if you’re thinking that existing techniques are gonna get stuck anywhere around their current level, then this kind of thing isn’t going to be very relevant."} {"prompt": "Yeah, so I guess it kind of raises a definitional question of, “What is current techniques?” How much do you change the architecture before you say, “Oh, well this is no longer like current machine learning methods. This is no longer prosaic AI?”", "completion": "Yeah, so I think the thing that’s really relevant from the perspective of alignment research is you want to assume something about what you can do, and the thing you want to assume you can do is: there is some model class. You optimize over that model class given an objective. 
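A minimal sketch of the capability being assumed there, with hypothetical names: alignment research treats "optimize over a model class given an objective" as a black box that it can build training schemes on top of, without caring about the details of the search.

```python
# Sketch of the assumed black-box capability (hypothetical names): given a model
# class and an objective, return a model from the class that scores well.

from typing import Callable, Iterable, TypeVar

Model = TypeVar("Model")

def optimize(model_class: Iterable[Model],
             objective: Callable[[Model], float]) -> Model:
    """Stand-in for whatever search AI researchers provide (gradient descent,
    architecture search, ...); here, brute force over a finite class."""
    return max(model_class, key=objective)

# An alignment scheme then only chooses the objective, e.g. "how highly does the
# overseer (a team of assistants, a debate judge, ...) rate this model's behavior?"
```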
Maybe you care about whether the objective has to supply gradients. Maybe it doesn’t even matter that much. So then, as an alignment researcher, you say, “Great, the AI researchers have handed us a black box. The black box works as follows. The black box takes some inputs, produces some outputs. You specify how good those outputs were, and then the black box adjusts over time to be better and better.And, as an alignment researcher, as long as something fits within that framework, you don’t necessarily care about the details of, “What kind of architecture are we searching over? Are you doing architecture search? Or what form does the objective take?” Well, what form the objective takes, you may care about, but most other details you don’t really care about because alignment research isn’t going to be sensitive to those details.So, in some sense, you could easily end up with a system: existing ML researchers would say, “Wow, that was actually quite a lot different from what we were doing in 2018,” but which an alignment researcher would say “That’s fine. The important thing from my perspective is this still matches with the kind of alignment techniques that we were developing,” so we don’t really care how different it looks. We just care about, did it basically change the nature of the game from the perspective of alignment?"} {"prompt": "Yeah. Can we look backwards in history and say, would techniques that we developed five or ten years ago work on today’s architectures?", "completion": "Yeah, so we can look back. Hindsight is always complicated and hazardous, but I think you would say, if you were to, say, in 1990, perform a similar exercise and look across techniques, I would say certainly the kinds of things we’re talking about now would exist. That would be part of your picture. They would not have nearly as much, be nearly as much of a focal point as they are today because they hadn’t yet worked nearly as well as they worked now, so I guess we would be talking about what fraction of your field of view would these techniques occupy?So, I think it’s pretty safe to say that more than 10% of your field of view would have been taken up by the kind of thing we’re discussing now, and the techniques developed with respect, with that 10% of possibilities in mind would still apply. Existing systems are very, very similar to the kinds of things people are imagining in the late ’80s, and there’s a question like, “Is that number 10% or is it a third?” I think that’s pretty unclear and I don’t have enough of a detailed understanding of that history to really be able to comment intelligently, and I’d wanna defer to people who were doing research in the area at that time.I do think the … if you had instead focused on different kinds of techniques, like if you’d been around in the further past and you were, say, trying to do AI alignment for expert systems, I don’t feel that bad about that. I guess some people look back on history and say, “Man, that would have been a real bummer if you’d been alive in the ’60s and you’d done all this AI alignment research that didn’t apply to the kind of AI we’re building now,” and my perspective is kind of like, “Look, one, if it takes 50 years to build AI, it doesn’t matter as much what the details are of the AI alignment work you did in the ’60s. 
Two, actually, there’s a lot of overlap between those problems; like many of the philosophical difficulties you run into in alignment are basically the same, even between existing systems and expert systems. Three, I would actually be pretty happy with the world where like when people propose a technique, a bunch of AI alignment researchers invest a bunch of time understanding alignment for expert systems, and then 15 years later, they move onto the next thing. It’s like not that bad a world. I expect you would, in fact … if you just solved this sequence of concrete problems, that actually sounds pretty good. It sounds like a good way to get practice as a field. It sounds reasonably likely to be useful. There’s probably lots of commonalities between those problems. Even if they turn out to be totally wasted, it’s a reasonable bet in expectation: you sort of have to do … that’s the cost we have to pay if you want to have done a bunch of research for the techniques that are actually relevant, unless you’re very confident the current techniques are not the things that will go all the way, or that it’s doomed. I think both those positions seem really, really hard to run to me. I haven’t heard very convincing arguments for either of those positions."} {"prompt": "What’s expert systems?", "completion": "Systems based on having a giant set of, maybe, reasoning rules and facts, and then they use these rules to combine these facts."} {"prompt": "And that just didn’t work?", "completion": "Yeah, there was a period where people were more optimistic about them. I don’t know the history very well at all. I think in general, certainly, it didn’t realize the ambitions of the most ambitious people in that field, and certainly, it’s not the shape of most existing systems, or of the kinds of systems people are most excited about today."} {"prompt": "Okay, so, we mentioned that a group that has kind of a different view from this prosaic AI is the Machine Intelligence Research Institute at Berkeley. If I understand correctly, you got into AI safety in part through at least that social group or that intellectual group, but it seems like now, you kind of recommend … you kind of represent a different node or, what’s the term, axis within the people working on AI safety. Yeah, how would you describe their view and how it differs from yours today?", "completion": "I would say the most important difference is they believe this prosaic AI alignment project is very likely to be doomed. That is, they think if the shape of sophisticated AI systems resembles the shape of existing ML systems, or if, in particular, you obtain sophisticated AI by defining a model class, defining an objective and doing gradient descent, finding the model that scores well according to the objective, then they think we’re just extremely doomed, such that they think the right strategy is to instead step back from that assumption and say, “Can we understand other ways that people could build sophisticated AI?” Part of that is like, if you’re doing gradient descent or this big model … if you’re doing gradient descent to find a model that performs well, you’re gonna end up with some actual particular models.
You’re gonna end up with some particular way of thinking that your giant neural net embodies, and you could instead … instead of just specifying procedures that give rise to that way of thinking, you could actually try and understand that way of thinking directly and say, “Great, now that we understand this, we can both reason about its alignment and maybe we can also design it more efficiently or we can more efficiently describe search procedures that will uncover it once we know what it is that they’re looking for.”And, I’d say that’s like the biggest difference, and the crux there is mostly the: is it possible to design alignment techniques that you make something like existing ML systems safe? And so, my view is that, mostly likely, that’s possible. Not most, like, more likely than not, not like radically more likely than not, but somewhat more likely than not, that’s possible and that as long as it looks possible and you have attractive lines of research to pursue and a clear path forward, we should probably work on that by default, and that we should … at some point, if it’s like, “Wow, it’s really hard to solve alignment for systems that look anything like existing ML,” then you really wanna understand as much as you can why that’s hard, and then you wanna step back and say, “Look, people in ML, it looks like the thing you’re doing actually is sort of like unfixably dangerous and maybe it’s time for us to think about weird solutions where we actually have to change the overall trajectory of the field based on this consideration about alignment.”If it’s not reasonable to call that weird, from the outside view, you might think, “Well, the goal of AI is to make things good for humans. It’s not crazy to change the direction of the field based on what is plausibly going to be alignable.”"} {"prompt": "But it would seem strange to them today?", "completion": "Yeah, people in ML would not be like, “Oh, that makes a lot of sense. Let’s just swap what we’re doing.” So, I guess, my position would be like, “I’m currently much, much more optimistic than MIRI people.” I think that’s the main disagreement about whether it will be possible to solve alignment for prosaic AI systems, and I think as long as we’re optimistic in that way, we should work on that problem until we discover why it’s hard or solve it."} {"prompt": "Just to make it more concrete for people, what are the kind of specific questions that MIRI is researching that they think are useful?", "completion": "I think, at this point, MIRI’s public research, the stuff that they publish on and talk about on the internet, one big research area is decision theory, so understanding, supposing that you have some agent which is able to make predictions about things or has views on the truth of various statements, how do you actually translate those views into decision? This is tricky, ’cause you want to say things like, “You care about quantities. What would happen if I were to do X,” and it’s not actually clear what would happen if I were to do X means. Is it causal, kind of factual, like a certain kind of statement? 
It’s not clear that’s the right kind of statement."} {"prompt": "So this is causal decision theory, evidential decision theory and-", "completion": "Yeah, and most of the stuff they’re considering seriously is like … like once you get really precise about it and you’re like, “We’d like this to be an algorithm,” the whole picture just gets a lot weirder and there are a lot of distinctions that people don’t normally consider in the philosophical community, that you kind of have to wade through if you want to be in the place where you have a serious proposal for: here is an algorithm for making decisions given as input views on these empirical questions, or given as input a logical inductor or something like that. So, that’s one class of questions that they work on. I think another big class of questions they work on is like … I mean like stepping back from, looking at the whole problem and saying from a conceptual perspective, supposing you grant this view, this worldview about what’s needed, what … I don’t know a good way to define this problem. It’s kind of just like, “Figure out how you would build an aligned AI,” which is a good problem. It’s the very high level problem. I endorse some people thinking about the very high level problem. I think it’s one of the more useful things to think about. There’s some flavor of it that depends on what facts you … what you think are the important considerations or what you think the difficulties are, so they work on a certain version of that problem. I think other examples of things include … they’re very interested just in, “What are good models for rational agency?” So, we have such models in some settings, like in Cartesian settings where you have an environment and an agent that communicate over some channel where they send bits to one another. It becomes much less clear what agency means once you have an agent that’s physically instantiated in some environment. That is, what does it mean to say that a human is like a consequentialist thing acting in the world, given that the human is just actually like: some of the degrees of freedom in the world are interacting in this complicated way to make a human. It’s quite complicated to talk about what that actually means, that there’s this consequentialist in the world. That’s a thing they’re like super interested in. Yeah, figuring out how to reason about systems that are logically very complex, including systems that contain yourself or contain other agents like you. How do we formalize such reasoning, is another big issue."} {"prompt": "Does MIRI have a different view as well about the likelihood of current methods producing general intelligence or is that-", "completion": "There’s probably some difference there. It’s a lot less stark than the other one. I think maybe a difference in that space that’s closer is like: I kind of have a view that’s more like: there’s probably some best way. There’s some easiest way for our current society as it’s currently built to develop AI, and the more of a change you want to make from that default path, the more difficult it becomes. Whereas I think the MIRI perspective would more be like: the current way that we build ML is reasonably likely to just be very inefficient, so it’s reasonably likely that if you were to step back from that paradigm and try something very different, that it would be comparably efficient and maybe more efficient, and I think that’s a little bit … yeah, I guess I don’t buy that claim.
I don’t think it’s as important as the definite doom claim."} {"prompt": "So, what are the best arguments for MIRI’s point of view that the current methods can’t be made safe?", "completion": "So, I guess I’d say there’s two classes of problems that they think might be unresolvable. One is: if I perform … if I have some objective in mind … suppose I even have the right objective, an objective that perfectly tracks how good a model is, all things considered according to human values, and then I optimize really aggressively on that objective. The objective is still just a feature of the behavior of this: I have this black box, I’m optimizing over the weights of some neural network, so now I have an objective that perfectly captures whether the behavior is good for humans, and I optimize really hard on that objective. So one of MIRI’s big concerns is that even if we assume that problem is resolved and you have such an objective, then it’s pretty likely you’re going to find a model which only has these desirable properties on the actual distribution where it was trained, and that it’s reasonably likely that, in fact, the system that you’ve trained is going to be some consequentialist who wants something different from human flourishing and just happens on the training distribution to do things that look good."} {"prompt": "Within that narrow range?", "completion": "Yes. So, an example of this phenomenon that I think MIRI people think is pretty informative, though certainly not decisive on its own, is: humans were evolved to produce lots of human offspring, and it’s the case that humans are sophisticated consequentialists whose terminal goals are not just producing offspring, so that even though the cognitive machinery that humans use is very good for producing offspring over human evolutionary history, it seems like it’s not actually great: it has sort of already broken down to a considerable extent, and in the long run looks like it will break down to a much, much greater extent. So, then, if you were like a designer of humans being like, “I know. I’ve defined this objective that tracks how many offspring they have. Now I’m going to optimize it over many generations. I’m gonna optimize biological life over a million generations to find the life which is best at producing offspring,” you’d be really bummed by the results. So, their sort of expectation is that in a similar way, we’re going to be really bummed. We’re going to optimize this neural net over a very large number of iterations to find something that appears to produce actions that are good by human lights, or we’re going to find something whose relationship to human flourishing is similar to humans’ relationship to reproduction, where they sort of do it as a weird byproduct of a complicated mix of drives rather than because that’s the thing they actually want, and so when generalized, they might behave very strangely."} {"prompt": "Okay, sounds kind of persuasive. What’s the counterargument?", "completion": "So, I think there’s a few things to say in response. So, one is that evolution does a very simple thing where you sample environments according to this distribution, then you see what agents perform well on those environments.
We, when we train ML systems, can be a little bit more mindful than that, so in particular, we are free to sample from whatever distribution over environments we are … any distribution we’re able to construct, and so as someone trying to solve prosaic AI alignment, you are free to look at the world and say, “Great, I have this concern about whether the system I’m training is going to be robust, or whether it might generalize in a catastrophic way in some new kind of context,” and then I’m free to use that concern to inform the training process I use, so I can say, “Great, I’m going to adjust my training process by, say, introducing an adversary and having the adversary try and construct inputs on which the system is going to behave badly.” That’s something that people do in ML. It’s called adversarial training, and if you do that, that’s very different from the process evolution ran. Now, you imagine that there’s someone roughly as smart as humans who’s like, constructing these weird environments, like if they’re looking at humans and say, “Great, the humans seem to care about this art shit,” then the adversary’s just constructing an environment where humans have lots of opportunities to do art or whatever, and then if they don’t have any kids, then they get down-weighted. If there’s some gap, if there’s some context under which humans fail to maximize reproductive fitness, then the adversary can specifically construct those contexts, and use that to select against. Again, the reproductive fitness analogy makes this sound kind of evil, but you should replace reproductive fitness with things that are good. Yeah, so that’s one thing. I think the biggest thing, probably, is that, as the designers of the system, we’re free to do whatever we can think of to try and improve robustness, and we will not just like sample-"} {"prompt": "Yeah, we can look forward rather than just look at the present generation.", "completion": "Yeah, although it’s a challenging problem to do so, so that’s a thing a bunch of people work on. It’s not obvious they’ll be able to succeed, certainly. I don’t think that like this analogy should make you think, like … I think the analogy maybe says there is a problem. There is a possible problem, but doesn’t say, “And that problem will be resistant to any attempt to solve it.” It’s not like evolution made a serious attempt to solve the problem."} {"prompt": "Yeah. If you can make the method corrigible, so that you can continue improving it, changing it, even as you’re going, with an AI transforming in the wild, that seems like it would partially solve the problem, ’cause one of the issues here is that humans ended up with the motivations that they have, desires that they have, and then we’re going about it in a single generation or a handful of generations in evolutionary time, changing everything about the environment. The environment’s changing much faster than we are, such that we’ve become … our drives no longer match what would actually be required to reproduce at the maximal rate, whereas if you were changing humans as we went, as our behavior ceased to be adaptive from that point of view, then perhaps you could keep us in line so we’d be fairly close to the maximal reproductive rate. Does that make sense?", "completion": "Yeah. I think that’s like an important part of the picture for why we have hope.
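A minimal sketch of the adversarial-training idea described above, assuming a standard PyTorch setup. The toy model, the labelling rule, and the FGSM-style adversary are hypothetical illustrations of the general pattern (an adversary constructs inputs on which the current system behaves badly, and those inputs are folded back into training), not anything the interview describes OpenAI as actually running.

```python
# Hypothetical toy sketch of adversarial training.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)          # stand-in for the system being trained
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def adversary(x, y, eps=0.3):
    """Construct a worst-case-ish input near x (FGSM): perturb in the direction
    that most increases the loss. A smarter adversary could search harder."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 10)
    y = (x[:, 0] > 0).long()            # toy labelling rule
    x_adv = adversary(x, y)             # inputs chosen to make the model fail
    # Train on both the natural distribution and the adversarially constructed
    # inputs, unlike evolution, which only ever "trains" on naturally sampled
    # environments.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```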
That is, if you’re like, “Yeah, we’re gonna evolve a thing that just wants human flourishing, or we’re gonna like, do gradient descent until we find a thing that really wants me to flourish and then we’re gonna let it rip in the universe,” that doesn’t sound great. But, if you’re like, “We’re gonna try and find a thing which helps humans in their efforts to continue to create systems that help humans continue to create systems that help humans achieve, help humans flourish,” then that’s … I guess you could imagine in the analogy: instead of trying to evolve a creature which just cares about human flourishing, you’re trying to evolve a creature that’s really helpful, and somehow, “Be really helpful and don’t kill everyone and so on,” is like an easier, a more imaginable set of properties to have, sort of even across a broad range of environments, than matching some exact notion of what constitutes flourishing by human lights. I think one reason that people at MIRI or from a similar school of thought would be pessimistic about that is they have this mental image of humans participating in that continuing training process, sort of training more and more sophisticated AIs. If you imagine a human is intervening and saying, “Here’s how I’m gonna adjust this training process,” or, “Here’s how I’m going to shape the course of this process,” it sounds kind of hopeless, because humans are so much slower, and in many respects, presumably so much less informed, less intelligent."} {"prompt": "That they might just be adding noise?", "completion": "Yeah. And it would be very expensive to have human involvement. Mostly, they wouldn’t be presuming to give direction, some random direction. I think the main response there is like: you should imagine humans performing this process … early on in this process, you should imagine humans being the ones adjusting objectives or adjusting the behavior of the system. Later on, you should imagine that as mostly being carried out by the current generation of AI systems, so the reason that humans can keep up as that process goes faster and faster is hopefully because we’re maintaining this property that there are always a whole bunch of AI systems trying to help us get what we want."} {"prompt": "Will it continue to bottom out in some sense, what humans say about how the upper level is going? I’m imagining if there’s multiple levels of the most advanced AI then the less advanced, I guess this is kind of what we were talking about earlier. Then you’ve got kind of humans at the bottom. At some point, would they just disappear from the equation and it’s like all-", "completion": "Yeah, so it’s always going to be anchored to what a human would have said. In some sense, that’s like the only source of ground truth in the system. Humans might not actually be … there might be some year beyond which humans never participate, but at that point, the reason that would happen would be because there is some system. Suppose in the year 2042, humans stop ever providing any input to AI systems again, the reason that would be possible is that in the year 2042 there was some AI system which we already trust to robustly “do well enough according to human lights.”"} {"prompt": "And it can do it faster and cheaper.", "completion": "Yeah.
It’s a little bit tricky to ever have that handoff occur, because that system in 2042, the one that you’re trusting to hand things off to, has never been trained on things happening in 2043, so it’s a little bit complicated, and it’s not that you’re gonna keep running that same system in 2042. It’s that that system is going to be sufficiently robust that it can help you train the system in 2043 that it’s going to … yeah."} {"prompt": "Yeah, if you could visit 50 years in the future and see everything that was happening there, how likely do you think it would be that you would say, “My view of this was broadly correct,” vs, “MIRI’s view was more correct than mine with hindsight?” I’m trying to measure how confident you are about your general perspective.", "completion": "Yeah, so I certainly think there are a lot of cases where it would be like, “Well, both views were very wrong in important ways,” and then you could easily imagine both sides being like, “Yeah, but my view was right in the important way,” so that’s certainly a thing which seems reasonably likely. In terms of thinking in retrospect that my view was like unambiguously right, I don’t know, maybe I’m on like, relative to MIRI’s view, maybe I’m at 50-70% … that’s pretty high, whatever, like 50-60% that like in retrospect we’ll be like, “Oh, yeah, this was super clear,” and then maybe on the other side I would put a relatively small probability that’s super clear in the other direction, that like maybe 20% or 10% or something that like in retrospect I’m like, “Geez, I was really wrong there. Clearly my presence in this debate just made things worse.”"} {"prompt": "And then there’s kind of a middle ground of both of them had important things to say, yeah?", "completion": "Yeah."} {"prompt": "Interesting. So, you’re like reasonably, reasonably confident, but it seems like but you would still support, given those probabilities, MIRI doing substantial research into their line of inquiry.", "completion": "Yeah, I’m excited about MIRI doing the stuff MIRI’s doing. I would prefer that MIRI people do things that were better on my perspective, which I suspect is most likely to happen if they came to agree more with this perspective."} {"prompt": "But at some point, let’s say that your line of research had four times as many resources or four times as many people, then you might say, “Well, having one more person on this other thing could be more useful,” even given your views, right?", "completion": "Yeah, although, I don’t think … the situation is not like there’s line of research A and line of research B and the chief disagreement is about which line of research to pursue. 
It’s more like: if I was doing something very, very similar to what MIRI’s doing, or doing something superficially quite similar, I would do it in a somewhat different way, so, to the extent I was working on philosophical problems that clarify our understanding of cognition or agency, I would not be working on the same set of problems that MIRI people are working on, and I think those differences matter probably more than the high-level, “What does the research look like?” So, there’s lots of stuff in the general space of research MIRI’s doing that I’d be like, “Yep, that’s a good thing to do,” which, now we’re in this regime of, yeah, it depends on how many people are doing one thing vs doing the other thing."} {"prompt": "Do you think that those … if your view is correct, is there going to be much like incidental value from the research that MIRI is doing or is it kind of just by the by at that point?", "completion": "So, one way research of the kind MIRI’s doing is relevant is in clarifying: when we talk about amplification or debate, they each have this key conceptual uncertainty. In the case of debate, is it the case that debates lead to … that the honest strategy, telling the truth, saying useful things, is actually a winning strategy in a debate, or in the case of amplification, is there some way to assemble some large team of aligned agents such that the resulting system is smarter than the original agents and remains aligned. Those conceptual difficulties seem not at all unrelated to the kinds of conceptual … like if you’re asking, “How would we build an aligned AI using infinite amounts of computing power without thinking at all about contemporary ML?” That’s a very similar kind of question, thinking, “What are the correct normative standards of reasoning that you should use to evaluate competing claims? When you compose these agents, what kind of decomposition of cognitive work is actually alignment preserving, or can be expected to produce correct results?” So, a natural way in which the kind of research MIRI’s doing could add value is by shedding light on those questions. And in expectation, I’d guess they’re at least several times less effective at answering those questions than if they were pointed at them more directly, you know? I don’t know if it’s like five times less effective than if they were pointed at them directly. I think it’s a smaller multiple than that, probably."} {"prompt": "What would you say to people who are listening who just feel kind of agnostic on whether the prosaic AI approach is best or MIRI’s is?", "completion": "You mean what would I say in terms of what they ought to do?"} {"prompt": "Yeah, or maybe what they ought to think or things to consider if they maybe don’t feel qualified to judge this debate.", "completion": "Sure, in terms of what to do, I suspect comparative advantage considerations will generally loom large, and so if one’s feeling agnostic, those will likely end up dominating: comparative advantage plus, in the short term, what would be the most informative, involve the most learning, build the most flexible capital. In terms of what to think, all things considered, I don’t know. That seems pretty complicated. It’s going to depend a lot on what kind of expertise you have, and just in general, looking at a situation with conflicting people who’ve thought a lot about the situation, how do you decide whose view to take seriously in those cases?
To be clear, the spectrum of views amongst all people, my view is radically closer to MIRI’s than almost anyone else in the machine learning community on most respects. There are other respects in which the machine learning community is closer to MIRI than I am. So like, the actual menu of available views is unfortunately even broader than this one."} {"prompt": "If I’m broader than Paul Christiano’s view and MIRI’s generalized view.", "completion": "Indeed."} {"prompt": "It’s unfortunate. You’re saying there’s a third option?", "completion": "Yeah. In fact, yeah, it’s quite a lot broader. Like, I think being agnostic is not a crazy response. I think there’s an easy position where, well, the most confident claims, sort of all these perspectives like differ substantially on emphasis, but one could basically could significant probability on all of the most confident claims from every perspective. Yeah, certainly the convex combination of them will be more accurate than any particular perspective, and then in order to do significantly better than that, you’re going to have to start making more seriously claims about who you’re willing to ignore."} {"prompt": "Who to trust, yeah. What would you say is kind of the third most plausible broad view?", "completion": "I think one reasonably typical view in the machine learning to which I’m sympathetic is the, “All of this will be mostly okay.” As AI progresses, we’ll get a bunch of empirical experience messing around with ML systems. Sometimes they’ll do bad things. Correcting that problem will not involve heroic acts of understanding-"} {"prompt": "Safety specifically, or alignment, specifically, not beyond what might happen anyway.", "completion": "Yeah, and it’s a little bit hard. You could separate that out into both the claim about what will happen anyway and a claim about what is required. I guess, the views we were talking about for me and MIRI were more about what is required. We’ve separate disagreements about what will likely happen. I think there’s a different ML position on what is likely to be required, which says more like, “Yeah, we have no idea what’s likely to be required. It’s reasonably likely to be easy, any particular thing we think is reasonably likely to be wrong,” and that’s like … I could try and flesh out the view more, but roughly, it’s just like, “We don’t know what’s going to happen and it’s reasonably likely to be easy, or by default, expected to be easy.”I think there’s a reasonable chance in retrospect that looks like a fine view. Yeah, I don’t see how to end up with high confidence in that view, and if you’re like a 50% chance of that view, it’s not gonna have that huge an effect."} {"prompt": "On your expected value of working on safety, yeah?", "completion": "Yeah, it may-"} {"prompt": "It only halves it at worst, yeah? Or at best.", "completion": "Yeah, and increasing that probability from … if you give that a significant probability, that might matter a lot if you have a, “We’re definitely doomed,” view. So, I think on the MIRI view, maybe accepting, giving significant credence to the machine learning perspective would significantly change what they would do, ’cause they currently have this view where you’re kind of at like zero. 
I don’t know if you’ve seen this post on the logistic success curve that Eliezer wrote."} {"prompt": "I haven’t.", "completion": "The idea is that if you’re close to zero, then most interventions … if your probability of success is close to zero, then most interventions that look common-sensically useful aren’t actually going to help you very much ’cause it’s just going to move you from like 0.01% to 0.02%."} {"prompt": "So this would be a view that’s kind of, “You need many things all at once to have any significant chance of success,” and so just getting one out of 100 things you need doesn’t move you much.", "completion": "That’s right, just making organizations a little bit more sane or fixing one random problem here or one random problem there isn’t going to help much. So, if you have that kind of view, it’s kind of important that you’re putting really low probability on this. This isn’t the only perspective in ML, but it’s one conventional ML perspective. I think on my view, it doesn’t matter that much if you give that 30% or 50% or 20% probability. I think that probability’s not small enough that you should discount that case: interventions that are good if the problem’s not that hard still seem likely to be useful, and also it’s not high enough that-"} {"prompt": "I would have thought that it would make little difference to your strategy ’cause in that case things would be, “Okay, you don’t really have to do anything,” so you can almost just … even if you think it’s 50/50 whether any of this is necessary or not, you can just largely ignore that.", "completion": "Yeah, that’s what I’m saying. It doesn’t make a huge difference. A way in which it matters is like: you might imagine there are some interventions that are good in worlds where things are hard, and if it’s 50/50, those interventions are still good. Maybe they’re half as good as they otherwise would have been, like we were saying. And there’s some interventions that are good in worlds where things are easy. That is, you might be like, “Well, if things were easy, we could still fuck up in various other ways and make the world bad,” and so, reducing those probabilities, I would say that’s also a valuable intervention, because the probability of things-are-easy is not low enough that that’s getting driven down to zero."} {"prompt": "So then just more normal world improvement, or making it more likely that we encode good values? So, if the alignment problem is solved, then it becomes more a question of, “What values will we in fact program into an AI?” And trying to make sure that those are good ones.", "completion": "Yeah, there’s a lot of things that actually could come up in that world. So, for example, your AI could have a very … if an AI has a very uneven profile of abilities, you could imagine having AI systems that are very good at building better explosives or designing more clever biological weapons, but that aren’t that good at helping us accelerate the process of reaching agreements that control the use of destructive weapons or better steering the future. So, another problem independent of alignment is this uneven-abilities-of-AI problem. That’s one example. Or we might just be concerned that as the world becomes more sophisticated, there will be more opportunities for everyone to blow ourselves up. We might be concerned that we will solve the alignment problem when we build AI, and then someday that AI will build future AI and it will fail to solve the alignment problem.
So, there’s lots of extra problems you could care about."} {"prompt": "I suppose there’s also: AI could be destabilizing to international relations or politics, or be used for bad purposes even though it’s aligned … so we can give it good instructions, but someone will give it instructions to cause harm.", "completion": "Yeah, so then there’s a question of how much you care about that kind of destabilization. I think most people would say they care at least some. Even if you have a very focused-on-the-far-future perspective, there’s some way in which that kind of destabilization can lead to irreversible damage. So yeah, there’s a bunch of random stuff that can go wrong with AI and you might become more interested in attending to that, or saying, “How do we solve those problems with a mediocre understanding of alignment, if a mediocre understanding of alignment doesn’t automatically doom you?”"} {"prompt": "Yeah, is there anything else you wanna say on, I guess, MIRI before we move on? Obviously, at some point, I’ll get someone on from there to defend their view and explain what research they think is most valuable, hopefully some time in the next couple of months.", "completion": "Yeah, so I guess one thing I mentioned earlier, there were like these two kinds of concerns, or two kinds of arguments they would give that we’re super doomed on prosaic AI alignment if AI looks like existing ML systems. I mentioned one of them: even if you have the right objective, it’s plausible that the thing you produce will be some consequentialist with some other real objective, who just incidentally is pursuing your objective on the training distribution. There’s a second concern that actually constructing a good objective is incredibly difficult, so in the context of the kinds of proposals that I’ve been discussing, like in the context of iterated amplification, they’d then be saying, “Well, all the magic occurs in the step where you aggregate a bunch of people to make better decisions than those people would make alone,” and in some sense, any way that you try and solve prosaic AI alignment is going to have some step like that, where you are implicitly encoding some answer to the alignment problem in the limit of infinite computation. ‘Cause they might think that that problem, alignment in that limit, is still sufficiently difficult, or has all the core difficulties in it, so that it’s not clear. This might say that we’re doomed under prosaic AI alignment, but more directly, it would just say, “Great, we need to solve that problem first, anyway,” ’cause there’s no reason to work on the distinctive parts of prosaic AI alignment rather than trying to attack that conceptual problem and then learning that we’re doomed, or having a solution which we could then … maybe it would give you a more direct angle of attack."} {"prompt": "So, you’re on the board of a newish project called Ought. What is that all about?", "completion": "So, the basic mandate is understanding how we can use machine learning to help humans make better decisions. The basic motivation is that we are super interested: if machine learning makes the world a lot more complicated and is able to transform the world, we want to also ensure that machine learning is able to help humans understand those impacts and steer the world in a good direction. That’s, in some sense, what the alignment problem is about.
You want to avoid a situation where there’s a mismatch between how well AI can help you develop new technologies and how well AI can help you actually manage this more complicated world it’s creating. I think the main project, or certainly the project that I am most interested in, is on what they call factored cognition, which is basically understanding how you can take complex tasks and break them down into pieces where each piece is simpler than the whole, so it doesn’t depend on the whole context, and then compose those contributions back to solve the original task. So, you could imagine that in the context of taking a hard problem and breaking it down into pieces that individual humans can work on, so like, say, a hundred humans, you don’t want any of them to have to understand the entire task. You want to break off some little piece that that person can solve. Or you can think of it in the context of machine learning systems. In some sense, the human version is most interesting because it is a warmup, or a way of studying in advance, the ML version. So, in the ML version, that would be, now instead of 100 people, you have some people and a bunch of ML systems, which have some set … maybe an ML system has more limited ability to respond to complex context, like a human has a context in mind when they articulate this problem, and an ML system has some limited ability to respond to that context. Or fundamentally, I think the most interesting reason to care about breaking tasks down into small pieces is because once you make the task simpler, once an ML system is solving some piece, it becomes easier to evaluate its behavior and whether its behavior is good. So, this is very related to iterated amplification: the way that iterated amplification hopes to solve AI alignment is by saying we can inductively train more and more complex agents by composing weaker agents to make stronger agents, so this factored cognition project is … it is one possible approach for composing a bunch of weaker agents to make a stronger agent, and in that sense, it’s like one of the main … it’s addressing one of the main ingredients you would need for iterated amplification to work. I think right now it’s kind of the main project that’s aiming at acquiring evidence about how well that kind of composition works, again, in the context of just doing it with humans, since humans are something we can study today. We can just recruit a whole bunch of humans. There’s like a ton of work in actually starting to resolve that uncertainty, and we can learn about … there’s a lot of work we’ll have to do before we can be able to tell, “Does this work? Does this not work?” But I’d say that’s one of the main things Ought is doing right now, and the reason I’m most excited about it."} {"prompt": "Is this a business?", "completion": "It’s organized as a nonprofit."} {"prompt": "Is Ought hiring at the moment and what kind of people are you looking for?", "completion": "Yeah, so I think there are some roles that will hopefully be resolved, already hired for, by the time this podcast comes out.
Some things that are likely to be continuing hires are researchers who are interested in understanding this question, understanding and thinking about how you compose the small contributions and solutions to harder tasks, and that’s a … there are several different disciplines that potentially bear on that, but sort of people who are interested in computer science, are interested in like … the approach they’re taking, things that are interested in programming languages are also a reasonable fit, people who are just … I think there’s some stuff that doesn’t fit will including in the academic discipline but if you just think about the problem, “How do you put together a bunch of people? How do you set up these experiments? How do you help humans be able to function as parts of a machine?”So, researchers who are interested in those problems is one genre, and another is engineers who are interested in helping actually build systems that will be used to test possible proposals or will substantiate the best guess about how to solve those proposals, and those will be … in contrast, Open AI is hiring researchers and engineers in ML, so sort of engineers would then be building ML systems, testing ML systems, debugging and improving and so on, ML systems. I think at Ought, similarly hiring both researchers and engineers and people in between, that they are the focus that’s lost on ML. It’s more on, again, building systems that will allow humans to … humans and other simple automation to collaborate, to solve hard problems, and so it is more … it involves less of a distinctive ML background. It’s more potentially a good fit for people who have software engineering background and the problem’s interesting and they have some relevant background, or just the problem’s interesting and they have a broad background in software engineering."} {"prompt": "Okay, well, I’ll stick up a link to that, to the Ought website with more information on specifically what it’s doing and I guess what vacancies are available whenever we manage to edit this and get it out.", "completion": "Cool."} {"prompt": "Okay let’s talk about what listeners who are interested working on this problem could actually do and what advice you have for them. So we’ve had a number of episodes on AI safety issues, which have covered these topics before with Dario Amodei, your colleague, as I mentioned. Jan Leike at DeepMind. As well as Miles Brundage and Allan Dafoe at FHI working on more policy and strategy issues. Do you have a sense of where your advice might deviate from those of those four people or just other people in general on this topic?", "completion": "So I think there’s a bunch of categories of work that need to be done or that we’d like to be done. I think I’d probably agree with all the people you just listed about. Each of them, presumably, would have advocated for some kind of work. So I guess Dario and Jan probably were advocating for machine learning work that really tried to apply or connect ideas about safety to our actual implementations. Filling up the engineering expertise to make these things work. And acquiring empirical evidence about what works and what doesn’t. And I think that project is extremely important. And I’m really excited about EA’s training up in ML and being prepared to help contribute to that project. Like figuring out whether ML’s a good fit for them. And then, if so, contributing to that project. 
I guess I won’t talk more about that because I assume it’s been covered on previous podcasts. I’d probably also agree with Miles and Allan that there’s a bunch of policy work and strategic work that seems also incredibly important. I also won’t talk more about that. I think there are some categories of work that I consider important that I wouldn’t expect those people to mention. For people with a background in computer science, but not machine learning, or who don’t want to work in machine learning, have decided that’s not the best thing, or don’t enjoy machine learning: I think there’s a bunch of other computer science work that’s relevant to understanding the mechanics of proposals like debate or amplification. So an example would be, right now, Ought, one of their projects is on factored cognition. So, in general, on how you can take a big task and decompose it into pieces which don’t depend on the entire context, and then put those pieces together in a way that preserves the semantics of the individual agents or the alignment of the individual workers. So that’s a problem which is extra important in the context of machine learning, or in the context of iterated amplification, but that one can study almost entirely independently of machine learning. That is, one can just say, let’s understand the dynamics of such decomposition. Let’s understand what happens when we apply simple automation to that process. Let’s understand what tasks we can decompose and can’t. Let’s understand what kind of interface or what kind of collaboration amongst agents actually works effectively. So that’s an example of a class of questions which are sort of well studied from a computer science perspective, but aren’t necessarily machine learning questions, which I’d be really excited to see work on. And there’s similar questions in the debate space, where just understanding how do we structure such debates. Do they lead to truth? Etc. I think one could also study those questions not from a computer science perspective at all. But I think it’s like super reasona … Like, I don’t know. I think philosophers differ a lot in their taste. But like, for example, if you’re a philosopher interested in asking a question about this area, then I think “under what conditions do debates lead to truth” is not really a question about computers in any sense. It’s the kind of question that falls under a computer scientist’s sensibilities, but I think that taking a really technical but not necessarily quantitative approach to that question is accessible to lots of people who want to try and help with AI safety. And similarly for amplification. So I think in both of those areas, there’s questions that could be studied from a very computer science perspective and involve software engineering and involve running experiments. And they can also be studied from a more philosophical perspective, just thinking about the questions and about what we really want and how alignment works. They can also be studied from this more psychological perspective of actually engaging with people. Like, some of them are going to involve running relatively large scale experiments involving humans. I don’t know if things are … like if the time is right for that. But that’s definitely, there’s definitely experiments in the space that do seem valuable.
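A toy sketch of the factored-cognition pattern discussed above, purely illustrative: a task is split into pieces small enough that a limited "worker" can handle each one without seeing the whole context, and the partial results are composed back into an answer. The trivially simple task, function names, and decomposition below are hypothetical and only meant to show the shape of decompose / weak worker / compose, not Ought's actual implementation.

```python
# Hypothetical toy sketch of decomposing a task for limited-context workers.

def weak_worker(subtask: dict) -> int:
    """Stand-in for a human (or ML system) with limited context: it can only
    handle one small, self-contained piece of the problem."""
    return sum(subtask["numbers"])

def decompose(task: dict, chunk_size: int = 3) -> list[dict]:
    """Split the task into pieces that don't depend on the entire context."""
    nums = task["numbers"]
    return [{"numbers": nums[i:i + chunk_size]} for i in range(0, len(nums), chunk_size)]

def compose(partial_results: list[int]) -> int:
    """Recombine the workers' contributions into a solution to the original task."""
    return sum(partial_results)

def solve(task: dict) -> int:
    subtasks = decompose(task)
    partials = [weak_worker(s) for s in subtasks]
    return compose(partials)

# Toy usage: the "hard" task is summing a long list no single worker sees in full.
print(solve({"numbers": list(range(100))}))  # 4950
```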
And it seems like at some point in the future there’s going to be more of them."} {"prompt": "Sorry, what do you mean by that?", "completion": "So if you ask how does this kind of decomposition work, or how do these kinds of debates work? Like, the decomposition is ultimately guided by … Right, so I originally described this process involving a human and a few AI assistants. Ultimately you want to replace that human with an AI that’s predicting what a human would do. But nevertheless, the way that you’re going to train that system, or the way we currently anticipate training that system, involves a ton of interaction. Like, the AI is really just imitating or maximizing the approval of some human who’s running that process. And so, in addition to caring about how machines work, you care a ton about how does that process work with actual humans? And how can you collect enough data from humans to … How can you cheaply collect enough data from humans that you can actually integrate this into the training process of powerful AI systems. So I don’t think that’s a fact about … That doesn’t bear on many of the traditional questions of psychology, and maybe that’s a bad thing to refer to it as. But it is like a … It involves studying humans. It involves questions about particular humans and about how humans behave. About how to effectively or cheaply get data from humans. Which are not really … They’re questions machine learning people have to deal with because we also have to deal with humans. But really it’s like a much larger … Machine learning people are not that good at dealing with the interaction with humans at the moment. So yeah. So that’s some family of questions. I think the ones I’m most excited about are probably more on the philosophical computer science bent. There are lots of people who wouldn’t be a great fit for working in ML who would be great for working on those questions. I think also, stepping back further, setting aside amplification and debate, I think there’s just still a lot of very big picture questions about how do you make AI safe? That is, you could focus on some particular proposal, but you could also just consider the process of generating additional proposals or understanding the landscape of possibilities, understanding the nature of the problem. I don’t know if you’ve ever had anyone from MIRI on, but I’m sure they would advocate for this kind of work. And I think that’s also … I consider that pretty valuable. Probably I’m more excited, at the moment, about pushing on our current most promising proposals, since I spent a bunch of time thinking about alternatives and it doesn’t seem as great to me. But I also think there’s a lot of value to clarifying our understanding of the problem. More like trying to generate totally different proposals. Trying to understand what the possibilities are like."} {"prompt": "Great, yeah. Well we’re planning to get someone on from MIRI in a couple of months’ time. Perhaps when it fits better with their plans and they’re hoping to hire. So we’ll have some synergies between having the podcast and them actually having some jobs available.", "completion": "That makes sense."} {"prompt": "So to make it a little bit more concrete. What are Open AI’s hiring opportunities at the moment? And, in particular, I heard that you’re not just hiring ML researchers but also looking for engineers.
So I was interested to learn kind of how they help with your work and how valuable those roles are compared to the kind of work maybe that you’re doing?", "completion": "I think there’s a sort of spectrum between … Yeah, there’s a spectrum between research and engineering, and most people at Open AI don’t sit at either extreme of that spectrum. So most people are doing some combination of thinking about more conceptual issues in ML, and running experiments, and writing code that implements ideas, and then messing with that code, thinking about how it works, like debugging. Yeah, there’s a lot of steps in this pipeline that are not that cleanly separated. And so, I think there’s value on the current margin from all the points on this spectrum. And I think actually at the moment, right now, I think I’m still spending, or the safety team is still spending, a reasonably large … Like, even people who are nominally very far on the research end are still spending a pretty large fraction of their time doing things that are relatively far towards engineering. So spending a lot of time setting up and running experiments, getting things working. Again, the spectrum between engineering and research is I think not that clean. Or ML is not really in a state where it’s that clean. So I think right now there’s a lot of room for people who are more at the engineering side. I think what I mean by more at the engineering side is people who don’t have a background doing research in ML, but do have a background doing engineering. And who are interested in learning about ML and willing to put in some time, on the order of months, maybe, getting more experienced thinking about ML, doing engineering related to ML. I think there’s a lot of room for that. Mostly … So I mentioned these three problems. The first problem was actually getting the engineering experience to make, say, amplification or debate work at scale. I think that involves a huge amount of getting things to work, sort of by the construction of the task. And similarly in the third category, of trying to push safety out far enough that it’s engaging with … that ML could actually be interacting in an interesting way with human cognition. I think that also involves, again, pushing things to a relatively large scale. Doing some research or some work that’s more similar to conventional machine learning work rather than being safety in particular. I think both of those problems are pretty important. And both of them require … like, are not that heavily weighted towards very conceptual machine learning work. I think my current take is, I currently consider the second category of work, figuring out from a conceptual perspective is this a good scheme to do, as the most important stuff to me, but it also seems very complementary with the other two categories, in the sense that our current philosophy, which I’m pretty happy with, is that we actually want to be building new systems and starting to run experiments on them in parallel with thinking about does this scheme … like, what are the biggest conceptual issues. For some combination of: the experiments can also kill the scheme off … like, even if the conceptual stuff works, if the experiments don’t, that’s another reason that the thing can be a non-starter. And second, that we can run a bunch of experiments that actually give you a lot of evidence about … like, help you understand the scheme much better. And obviously, independent of the complementarity, actually being able to implement these ideas is important.
Like there’s obviously complementarity between knowing whether x works and actually having the expertise there to be able to implement x. Right? The case that we’re aiming at is the case where we have both developed a conceptual understanding of how you can build aligned AI, and have actually developed teams and groups that understand that and are trained to actually put it into practice in cases where it matters. So we’d like to aim towards the world where there’s a bunch of teams that are able to … Yeah, that are basically able to apply cutting edge ML to make AI systems that are aligned rather than unaligned. That’s, again, harking back to the very beginning of our discussion, we talked about these two functions of safety teams. I think the second function, of actually making the AI aligned, is also an important function. Obviously, it only works if you’ve done the conceptual work. But also, realistically, the main way the conceptual work is going to be valuable is if there are teams that are able to put it into practice. And that problem is, to a significant extent, an engineering problem."} {"prompt": "Just quickly, do you know what vacancies Open AI has at the moment?", "completion": "So I guess on the safety team, yeah … I mostly think about the safety team. On the safety team, we are both very interested in hiring ML researchers who have a background in ML research, who have done that kind of work in the past or have done exceptional work in nearby fields and are interested in moving into ML. We’re also pretty interested in hiring ML engineers. That is, people who have done engineering work and are maybe interested in learning, or have put in some amount of time. So ideally these are people who are either exceptional at doing engineering related to ML, or are exceptional at engineering and have demonstrated that they’re able to get up to speed in ML and are now able to do that quality of work. And again, those roles are not … In terms of what they involve, there’s not a clean separation between them. It’s basically just a spectrum. Yeah, there’s several different skills that are useful. We’re really looking for all of those skills. Like the ability to build things, the ability to do engineering, the ability to do those parts of engineering that are distinct to ML. The ability to reason about safety, the ability to reason about ML. Both of those at a conceptual level. So the safety team is currently looking for the entire spectrum of stuff we do. I think that’s probably the case in several other teams within the organization. That is, the organization is large enough, there’s like a bunch of places now that, given a particular skill set … Well, again, given any particular skill set on that spectrum, there’s probably a place. The organization overall is not that large. We’re at the scale of 60 full-time people, I think. So there’s still a lot of roles that don’t really exist that much, like they would at a very large company. But there’s a lot of engineering to be done. There’s a lot of conceptual work to be done. And a lot of the whole space in between those."} {"prompt": "Yeah. How does work at Open AI compare to DeepMind and other top places that people should have at the forefront of their brains?", "completion": "You mean in terms of my assessment of impact? Or in terms of the experience day to day?"} {"prompt": "I think in terms of impact, mostly.", "completion": "Yeah. I don’t think I have a really strong view on this question.
I think it depends in significant part on things like where you want to be and which particular people you’re most excited about working with. I guess those are going to be the two biggest inputs. Yeah, I think that both teams are doing reasonable work that accelerates safety. Both teams are giving experience implementing things and understanding how you can be integrated into an AI project. I’m optimistic that over the long run, there will be some amount of consolidation of safety work at wherever happens to be the place that is designing the AI systems for which it’s most needed."} {"prompt": "Awesome. A question that quite a few listeners wrote in with for you was how much people who were concerned about AI alignment should be thinking about moving into computer security in general? And what’s the relationship between computer security and AI safety?", "completion": "I think it’s worth distinguishing two relationships between security and alignment, or like two kinds of security research. So one would be security of computer systems that interface with or are affected by AI. So this is kind of like the conventional computer security problem, but now in a world where AI exists. Or maybe you even aren’t focusing on the fact that AI exists and are just thinking about conventional computer security. So that’s one class of problems. There’s a second class of problems, which is the security of ML systems themselves. Like, to what extent can an ML system be manipulated by an attacker, or to what extent does an ML system continue to function appropriately in an environment containing an attacker. So I have different views about those two areas. So on the first area, computer security broadly, I think my current feeling is that computer security is quite similar to other kinds of conflict. So that is, if you live in a world where it’s possible to attack. You know, someone’s running a web server. It’s possible to compromise that web server. Like, a bunch of people have computers. It’s possible to effectively steal resources from them or to steal time on their computers. That’s very similar to living in a world where it’s possible to take a gun and shoot people. And I regret that; in general, I’d love it if there were fewer opportunities for destructive conflict in the world. Like, it’s not great if it’s possible to steal stuff or blow stuff up or so on. But from that perspective, I don’t think computer security is like … I think the core problem in AI alignment, like the core question, is can we build AI systems that are effectively representing human interests? And if the answer is no, then there are enough forms of possible conflict that I think we’re pretty screwed in any case. And if the answer is yes, if we can build powerful AI systems that are representing human interests, then I don’t think cyber security is a fundamental problem any more than the possibility of war is a fundamental problem. Like, it’s bad. It’s perhaps extremely bad, but we will be able … At that point, the interaction will be between AI systems representing your interests and AI systems representing someone else’s interests, or AI systems representing no one’s interests. And, at that point, I think the situation is probably somewhat better than the situation is today. That is, I expect cyber security to be less of a problem in that world than it is in this world, if you manage to solve alignment. So that’s my view on the security that’s not ML security, conventional computer security, and how alignment interfaces with it.
I think it can be … I basically think quantitatively, computer security can become somewhat more important during this intermediate period. Or AI is especially good at certain kinds of attacks and maybe not as useful. Like it may end up not being as useful for defense. And so one might want to intervene on making AI systems more useful for defense. But I think that doesn’t have outsize utilitarian impact compared to other cause areas in the world.I think security of ML systems is a somewhat different story. Mostly because I think security of ML systems … like intervening on security of ML systems seems like a very effective way to advance alignment, to me. So if you ask how are alignment problems likely to first materialize in the world? Like supposing that I have built some AI system that isn’t doing exactly the thing that I want. I think the way that that’s likely to first show up is in the context of security.So if I build like a virtual system that’s representing my interests on the internet, it’s like a little bit bad if they’re not exactly aligned with my interests. But in a world containing an attacker, that becomes catastrophically bad often. Because an attacker can take that wedge between the values of that system and my values and they can sort of create situations that exploit that difference.Right. So, for example, if I have an AI that doesn’t care about some particular fact. Like it doesn’t care about the fact that it uses up a little bit of network bandwidth whenever it sends this request. But I would really care about that because I wouldn’t want to keep sending requests arbitrarily. So an attacker can create a situation where my AI is going to become confused and because it isn’t attending to this cost. An attacker is motivated to create a situation where the AI will therefore pay a bunch of the cost. So motivated to trick my AI that doesn’t care about sending messages into sending very very large numbers of messages.Or like if my AI normally behaves well, then there exists this tiny class of inputs with very very small probability and encounters an input that causes it to behave maliciously. And that will appear eventually in the real world, perhaps. And that’s part of, sort of, the … part of the alignment concern. Is that that will appear naturally in the world with small enough probability. Or as you run AI long enough. But it will definitely first appear when an attacker is trying to construct a situation in which my AI behaves poorly. So I think security is this interesting connection where many alignment problems, not literally all, but I think a majority you should expect to appear first as security problems. And as a result, I think security is sort of one of the most natural communities to do this kind of research in."} {"prompt": "When you say an attacker would try to do these things, what would be their motivation there?", "completion": "Honestly it would depend on exactly what AI system it is. But a really simple case would be if you have a virtual assistant going out to make purchasing decisions for you. The way it makes its decisions is slightly wrong. There are a thousand agents in the world, there are a thousand people who would like the virtual assistants to send them some money. So if it’s possible to manipulate the decision it uses for deciding where to send money, then that’s a really obvious thing to try and attack.If it’s possible to cause it to leak your information. 
So suppose you have an AI which has some understanding of what information … of what your preferences are, but doesn’t quite understand exactly how you regard privacy. There are ways of leaking information that it doesn’t consider a leak. Because it has an almost, but not completely, correct model of what constitutes leaking. Then an attacker can use that to just take your information by setting up a situation where the AI doesn’t regard something as a leak, but it is a leak.If there’s any difference between what is actually bad and what your AI considers bad, an attacker can come in and exploit that difference if there’s some action that would have a cost to you and it benefits the attacker. Then the attacker wants to set things up so that your AI system is not recognizing the cost to you.So taking money, taking information, using your computer to launch other malicious activities. Like run denial of service. Just causing destruction. Like there’s some fraction of attackers who just want to run denial of service attacks. So if you can compromise the integrity of a bunch of AI systems people are using, that’s a bummer.Maybe they want to control what content you see. So if you have AI systems that are mediating how you interact with the internet. You know your AI says, here’s something you should read. There are tons of people who would like to change what your AI suggests that you read just because every eyeball is worth a few cents. If you can deploy that at scale, it’s like a lot of cents.So that’s the kind of situation where some of those problems aren’t alignment. There are a lot of security problems that aren’t alignment problems. But I think it’s the case that many many alignment problems are also security problems. So if one were to be working in security of ML with an eye towards working on those problems that are also alignment problems, I think that’s actually a pretty compelling thing to do from a long term AI safety perspective."} {"prompt": "So, it seems to me like AI safety is a pretty fragile area where it will be possible to cause harm by doing subpar research or having the wrong opinions or giving the wrong impression, being kind of a loudmouth who has not terribly truth-tracking views. How high do you think the bar is for people going into this field without causing harm? Is it possible to be kind of at the 99th or 99.9th percentile of suitability for doing this but still, on balance, not really do good because the kind of unintentional harm that you do outweighs the positive contribution that you made?", "completion": "So, I do think the current relationship between the AI alignment community or safety community and the ML community is a little bit strange in that you have this weird situation with a lot of external interest in safety and alignment. We have a lot of external funding, a lot of people on the street. It sort of sounds like a compelling concern to them, and that causes a lot of people in machine learning to be kind of on the defensive. That is, they see a lot of external interest that’s often kind of off-base or doesn’t totally make sense.
They’re concerned about policies that don’t make sense or diversion of interest from issues they consider important to some incoherent concerns.So, that means, again, they’re a little bit on the defensive in some sense, and as a result, I think it’s kind of important for people in the field to be reasonably respectful and not causing trouble, because there’s more likely in most context to actually cause a sort of hostile response. I don’t know if that’s much of a property of people. I think someone who believed that this was an important thing, if you’re at the point where you’re like, “Yep, I’m really concerned about causing political tension or really rocking the boat-”"} {"prompt": "That’s not a good sign.", "completion": "Yeah, when they hit that point, if you’re at that point and you’re basically behaving sensibly, then I think things could probably be okay. I’ve definitely sometimes … I have from time to time caused some distress or run into people who are pretty antagonistic towards something I was saying, but I mostly think if you care about it a lot and are being sensible, then I’d be like very surprised if the net effect was negative. I think a lot of people don’t care about it very much. They would disagree with this position and say that, “Look, the reason people are antagonistic is not because they’re being reasonably concerned about outsiders who don’t have a clear understanding pushing bad policies.” The reason that they’re defensive is just ’cause they’re being super silly, and so it’s just time for a showdown between people who are being silly and people who have sensible views.And like, if you’re coming in with that kind of perspective, then, presumably this question’s not interesting to you because you’re just like, “Yeah, Paul’s just one of the silly sympathizers.” It’s not clear that I’m allowed to give recommendations to people like that or that they would … it’s not clear that they would be interested in the recommendations. I would recommend just as part of a compromise perspective, if you have that view, then there exists other people like Paul who have a different view on which, like, there are some reasonable concerns everyone wants to behave somewhat respectfully towards those concerns. It’ll be good if we all compromised and just didn’t destroy things or really piss people off.If you couldn’t have worked on AI…"} {"prompt": "So, if we imagine you and your colleagues and people who are kind of similar to you in other organizations before you got in AI safety but you had your skills and talents and interests, but I would say that you can’t work on AI safety, what do you think you should have done otherwise?", "completion": "Yeah, so by can’t work on AI safety, you mean let us ignore all of the impacts of my work via the effect on … a natural thing that I might have done would be go into AI, and I’m like, “AI seems important independent of alignment. It seems like AI’s reasonably likely,” as a person with a sort of technical background, it kind of seemed, especially in the past … this is more obvious ’cause I’ve neglected this argument in the past. But, it seems like there’s a good ratio of effect of the area to congestion or number of people trying to work on it, and it was a good match for my comparative advantage."} {"prompt": "Yeah, let’s maybe set that aside as well ’cause it’s pretty similar.", "completion": "Yeah, that seems not in the spirit of the question. 
Yeah, so setting aside all of AI … and let’s set aside everything that’s having an effect via the overall importance of AI. I am pretty excited about overall improving human capacity to make good decisions, make good predictions, coordinate well, etc., so I’m pretty excited about that kind of thing. I think it would be a reasonable bet, so that includes both stuff like … some of these things aren’t a good fit for my comparative advantage so it’s probably not what I should do. Examples of things that aren’t a good fit for my comparative advantage are: understanding pharmacological interventions to make people smarter, understanding … just having a better map of determinants of cognitive performance. “How can you quickly measure cognitive performance? What actually determines how well people do at complicated messy tasks in the real world,” so that you can intervene on that.I think that’s an area where science can add a really large amount of value. It’s very, very hard for a firm to add value in that space compared to a scientist ’cause you’re just gonna discover facts and you’re not gonna be able to monetize them very well, probably. That’s an example of proving human capacity in a way that I probably wouldn’t have done because it’s not a great fit for my abilities. Things that are a better fit for my abilities are like stuff that’s more about what sort of institutions or mechanisms do you use? I don’t know if I would have worked on that kind of thing. I might have. So, an example of a thing I might work on is like-"} {"prompt": "A little bit more law and economics or-", "completion": "Yeah, so an example of a thing that I find very interesting is the use of decision markets for collective decision making. And so, that’s an example of an area that I would seriously consider and I think there’s a lot of very interesting stuff you can do in that space. It’s not an area I’ve thought about a huge amount because it seems like significantly less high-leverage than AI, but it is like a thing which I think there’s a lot more mathematical work to do, and so if you’re avoiding AI and you’re like, “We’re just math, really,” I’m almost certainly going to be working in some area that’s very, very similar to theoretical computer science in terms of what skills it requires."} {"prompt": "I guess, yeah, are there other key questions in that field that stand out as being particularly important in maths, computer science, other than AI-related things?", "completion": "So, definitely, most of the questions people ask, I think, are, if they’re relevant at all, primarily relevant through an effect on AI, so I don’t know how much, exactly. I mean, I took those off the table. Maybe that was too much. I think the basic problem is if you really care about differential progress, effective altruists tend to have this focus on, “It doesn’t matter if we get somewhere faster. 
It mostly matters what order technologies are developed in or what trajectory we’re on.” I think really a lot of the things people work on are like … a lot of things people work on in math or computer science are like founded on this … based on this principle, “We don’t know how X is going to be helpful, but it’s going to be helpful in some way,” which I think is often a valid argument, but I think is not helpful for differential progress, or like, you need a different flavor of that argument if you wanna say … It’s hard to say, “We don’t know how this is going to be helpful, but we believe it’s going to be helpful to things that are good.”"} {"prompt": "Specifically, yeah.", "completion": "Yeah, so I think a lot of stuff in math and computer science is less appealing from like a long-run altruist perspective because of that. I think stuff on decision making in particular, like, “What kinds of institutions do you …” A thing I was very interested in and did work on in my thesis was just like, “There’s this giant family of problems. You have N people. They like each have access to some local information and would like to make some decisions. You can formalize this as problems in that space. They would like to decide what to produce and what to consume and what to build,” so I’m just asking those questions saying, “What are good algorithms that people can use?” So, I really am asking the computer science question, yeah. I don’t know that much about these areas, but it’s a very exciting kind of area."} {"prompt": "You may not have anything new to say about this one, but what would you say are the most important ways that people in the effective altruism community are approaching AI issues incorrectly?", "completion": "So, I think one feature of the effective altruism community is its path dependence on founder effects, or people in EA who are interested in AI safety are often sort of very informed by this MIRI perspective, for the very sensible reason that the MIRI folk and Nick Bostrom were probably the earliest people talking seriously about the issue. So there’s like the cluster of things that I would regard as errors that would come with that. So like, some perspective on how you should think about sophisticated AI systems, so for example, very often thinking in terms of a system that has been given a goal.This is actually not a mistake that MIRI makes. This is a mistake many EAs make. Many EAs would think about an AI as being handed some goal, like an explicit representation of some goal, and the question is just how do we choose that specific representation of a goal such that pursuing it leads to good outcomes, which I think is an okay model of AI to work with sometimes, but it’s mostly not … certainly not a super accurate model, and most of the problems in AI alignment appear in that model. So, that’s like a kind of error … again, attributing that one to MIRI is somewhat unfair in that MIRI themselves wouldn’t make this error, but it is a consequence of people-
An analogous thing is: I think that the way you should be thinking, probably, about building safe AI systems is more based on this idea of corrigibility, ’cause there’s AI systems that are going along with what … helping people correct them, helping humans understand what they’re doing and overall, participating in a process that points in the right direction rather than attempting to communicate the actual, “What is valuable?” or having an AI system that embodies what humans intrinsically want in the long-run.So, I think that’s a somewhat important distinction and that’s kind of intuitively, if an ML person talks about this problem, they’re really going to be thinking about it from that angle. They’re gonna be saying like, “Great, we want our AI to not kill everyone. We want it to help us understand what’s going on,” etc., and so sometimes EAs come in with the perspective of, “But consider the whole complexity of moral value and how would you communicate that to an AI?” I think that is like an example of a mismatch that’s probably mostly due to an error on the EA side, though it’s certainly the case that this concept, corrigibility, is a complicated concept and if you actually think about the mechanics of how that works, it’s like, really, there are a lot more moving parts than the normal ML perspective kind of suggests.Or like, again, it’s not even really the current ML perspective. It’s like the knee jerk response of someone who’s been actually thinking about ML systems. I guess I have differences of views with … I think EAs often, maybe also for founder effect reasons like … actually no, I think for complicated reasons, they tend to have a view where the development of AI is likely to be associated with both sort of very rapid changes and also very rapid concentration of power. I think the EAs overestimate the extent to which or the probability of that happening, so this is like, yeah, that’s certainly a disagreement between me and most EAs. I think it’s much more likely we’re gonna be in the regime where there’s reasonably broadly distributed AI progress and AIs getting deployed a whole bunch all around the world.And, maybe that happens rapidly over the timescale of a year or two years that the world moves from something kind of comprehensible to something radically alien, but it’s not likely to be a year during which somewhere inside Google, AI’s being developed and at the end of the year, rolls out and takes over the world. It’s more likely to be a year during which just everything is sort of in chaos. The chaos is very broadly distributed chaos as AI gets rolled out."} {"prompt": "Is it possible that there’ll be better containment of the intellectual property such that other groups can’t copy and one group does go substantially ahead? At the moment, almost all AI research is published publicly such that it’s relatively easy to replicate, but that may not remain the case.", "completion": "Yeah, so I think there’s definitely this naïve economic perspective on which this would be incredibly surprising, namely if … so, in this scenario where AI’s about to take over the world, then … and it’s driven primarily by progress in the AI technology rather than control of large amounts of hardware, then that intellectual property now … the market value, sort of marked to market, would be like ten trillion dollars or whatever, so you sort of expect an actor who is developing that … the amount of pressure, competition to develop that would be very large.
You expect very large coalitions to be in the lead over small actors.So, it wouldn’t … Google’s not quite at the scale where they could plausibly do it. You could imagine sort of if all of Google was involved in this project, that becomes plausible, but then again, you’re not imagining a small group in a basement. You’re imagining an entity, which was already producing on the order of like … was already valued on the order of multiple trillions of dollars, putting some large share of its resources into this development project.And, that’s kind of conceivable. The value of Google going from five trillion dollars to a hundred trillion dollars, that’s a huge jump. It’s 20x in value, a hundred trillion dollars being your value if you take over the world. The 20x is a huge jump, but that’s kind of in the regime of what’s possible, whereas I think a billion dollars taking over the world is just super implausible. There’s an economic perspective which makes that prediction very confidently. To compare that to the real world, you have to think about a lot of ways in which the real world is not like an idealized simple economic system, but I still think it will be the case that probably AI development will involve very large coalitions involving very large amounts of hardware, large numbers of researchers regardless. If intellectual property is contained really well, then it might take place within a firm or a tightly coordinated cluster of firms rather than distributed across the academic community.In fact, I would not be at all surprised if the academic community didn’t play a super large role, but then the distinction is between distributed across a large number of loosely coordinated firms vs distributed across a network of tightly coordinated firms, and like, in both cases, it’s a big group. It’s not like a small group being covert. And, once you’re in the regime of that big group, then yeah, probably what ends up happening there is like the price … so, if it’s like Google’s doing this, unless they’re, in addition to being really tight about IP, also really tight about what they’re doing, you see the share price of Google start growing very, very rapidly in that world, and then probably, as that happens, eventually you start running into problems where you can’t scale markets gracefully and then policymakers probably become involved.At the point where the market is saying Google is roughly as valuable as everything else in the world, everyone is like, “Geez, this is some serious shit.” Google’s an interesting case, actually, ’cause corporate governance at Google is pretty poor, so Google has this interesting property where it’s not clear that owning a share of Google would actually entitle you to anything if Google were to take over the world. Many companies are somewhat better governed than Google in this respect."} {"prompt": "Explain that.", "completion": "So, Google is sort of famous for shareholders having very little influence on what Google does, so if Google hypothetically were to have this massive windfall, it’s not really clear … it would be kind of a complicated question what Google as an organization ends up doing with that windfall, and Google seems kind of cool. I like Google. They seem nice, probably they’d do something good with it, but it’s not obvious to me that being a shareholder in Google then gives you-"} {"prompt": "You don’t get the dividend? You could sell the shares or-", "completion": "You get the dividend, but it’s not clear whether it would be a dividend.
For example, most shares that are sold on Google-"} {"prompt": "You’re saying there’s a possibility of retaining the earnings to just invest in other things and it never gets handed back-", "completion": "Yeah, they’d build some Google city, more Google projects."} {"prompt": "Interesting.", "completion": "In particular, most shares of Google that are traded are non-voting shares, I think. I don’t actually know very much about Google’s corporate governance. They’re sort of famous-"} {"prompt": "There’s two classes, I think, yeah.", "completion": "So, I believe a majority of voting shares are still held by like three individuals. So, I think the shareholders don’t have any formal power in the case of Google, essentially. There’s a question of informally, there’s some expectations, and again, if you’re taking over the world, formal mechanisms are probably already breaking down."} {"prompt": "There’s also plenty of surplus to distribute.", "completion": "Well, yeah, that depends on what you care about. So, from the perspective, in general, like, as AI’s developed, from the perspective of humans living happy lives, there’s sort of massive amounts of surplus. People have tons of resources. From the perspective of if what you care about is relative position or owning some large share of what is ultimately out there in the universe, then there’s, in some sense, there’s only one universe to go around, and so people will be divvying it up.So, I think the people who are most interested in living happy lives and having awesome stuff happen to them and having their friends and family all be super happy, those people are all just gonna be really satisfied and it’s gonna be awesome, and the remaining conflict will be amongst either people who are very sort of greedy in the sense that they just want as much stuff as they can have, or states that are very interested in insuring the relative prominence of their state, things like that. Utilitarians, I guess, are one of the offenders here. A utilitarian wouldn’t be like, “Great, I got to live a happy life.” Utilitarians like-"} {"prompt": "They have linear returns to more resources, more than most people do, yeah. I guess any universalist moral system may well have this property, or actually not necessarily, but most of them.", "completion": "Yeah, I think a lot of impartial values generally have, yeah."} {"prompt": "Another blog post you wrote recently was about how valuable it would be if we could create an AI that didn’t seem value aligned, whether that would have any value at all or whether it would basically mean that we’d gotten 0 value out of the world. Do you want to explain what your argument was then?", "completion": "Yeah, so I think this is a perspective that’s reasonably common in the ML community and in the broader academic world or broader intellectual world, namely, you build some very sophisticated system. One thing you could try and do is you could try and make it just want what humans want. Another thing you could do is you could just say, “Great, it’s some very smart system that has all kinds of complicated drives. Maybe it should just do its own thing. Maybe we should be happy for it, you know, the same way that we think humans are an improvement over bacteria, we should think that this AI we built is an improvement over humans.”"} {"prompt": "Should live its best life.", "completion": "Yeah, so I think it’s not an uncommon perspective. I think people in the alignment community are often pretty dismissive of that perspective. 
I think it’s a really hard question. I think people on both sides, both people who sort of accept that perspective intuitively and people who dismiss that perspective, I think haven’t really engaged with how hard a moral question that is. Yeah, I consider it extremely not obvious. I am not happy about the prospect of building such an AI just ’cause it’s kind of an irreversible decision, or handing off the world to this kind of AI we built is a somewhat irreversible decision."} {"prompt": "It seems unlikely to be optimal, right?", "completion": "Yes. I guess I would say half as good. If it’s half as good as humans doing their thing, I’m not super excited about that. That’s just half as bad as extinction. Again, trying to avoid that outcome, it’d be half as important as trying to avoid extinction, but again, the factor of two’s not going to be decisive. I think the main interesting question is, “Is there such an AI you could build that would be close to optimal?” And I do agree that a priori, most things aren’t going to be close to optimal. It’d be kind of surprising if that was the case. I do think there are some kinds of AIs that are very inhuman for which it is close to optimal, and understanding that border between when that’s very good, like when we, as part of being a cosmic citizen, should be happy to just build the AI vs when that’s a great tragedy.It’s important to understand that boundary if there’s some kind of AI you can build that’s not aligned that’s still good, so in that post I both made some arguments for why there should be some kinds of AIs that are good despite not being aligned, and then I also tried to push back a little bit against the intuitive picture some people have that is the default."} {"prompt": "Yeah, so I guess the intuitive picture in favor is just, “It’s good when agents get what they want and this AI will want some things, and then it will go about getting them, and that’s all for the good,” and the alternative view would be, “Well, yes, but it might exterminate life on earth, and then fill the universe with something like paperclips or some random thing that doesn’t seem to us like it’s valuable at all, so what a complete waste that would be.” Is that about right?", "completion": "That’s definitely a rough first pass. That’s basically right. There’s definitely a lot that can be said on the topic. For example, someone who has the favorable view could say, “Yes, it would be possible to construct an agent which wanted a bunch of paperclips, but such an agent would be unlikely to be produced, so you’d have to go out of your way.” In fact, maybe the only way to produce such an agent is if you’re really trying to solve alignment. If you’re just trying to run something like evolution, then consider the analogy of evolution: humans are so far from the kind of thing that, yeah.So, one position would be, “Yeah, there exist such bad AIs,” but if you run something like evolution, you’ll get a good AI instead. So, that perspective might then be optimistic about the trajectory of modern ML. That is, on some alignment perspectives, you’re like, “Well, this is really terrifying. We’re just doing this black box optimization. Who knows what we’re going to get?” From some other perspectives, you’re like, “Well, that’s what produced humans, so we should pay it forward.” I think, also, people get a lot of mileage out of the normal analogy to descendants.
That is, people say, “Well, we would have been unhappy had our ancestors been really excited about controlling the trajectory of our society and tried to ensure their values were imposed on the whole future,” and likewise, even if our relationship to AI systems we built is different than the relationship of our ancestors to us, it has this structural similarity and likewise, the AI would be annoyed if we went really out of our way and paid large costs to constrain the future structure of civilization, so maybe we should be nice and do unto others as we would have them do unto us."} {"prompt": "I don’t find that persuasive, personally.", "completion": "I certainly don’t find it persuasive out of the box, yeah."} {"prompt": "It just seems very different. I guess we’re very similar by design to humans from 500 years ago just with probably more information and more time to think about what we want, whereas I think you can’t just … yeah, an AI might just be so differently designed that it’s like a completely different jump, whereas from our point of view, it could be, “Well, yeah.”", "completion": "I think the more compelling … so, I don’t really lean much on this. I don’t take much away from the analogy to descendants. I think it’s a reasonable analogy to have thought about, but it’s not going to run much of the argument. I think the main reason that you might end up being altruistic towards, say, the kind of product of evolution would be if you said, “From behind the veil of ignorance, humans have some complicated set of drives, etc.” If humans go on controlling earth, then that set of values and preferences humans have is gonna get satisfied; if we were to run some new process that’s similar to evolution, it would produce a different agent with a different set of values and preferences, but from behind the veil of ignorance, it’s just as likely that our preferences would be the preferences of the thing that actually evolves on earth as that our set of preferences would be the preferences of this AI that got created. So, if you’re willing to step far enough back behind the veil of ignorance, then you might say, “Okay, I guess.”"} {"prompt": "50/50.", "completion": "And, I think there are some conditions under which you can make that argument tight and so even a perfectly selfish causal decision theorist would in fact, for like these normal, weird acausal trade reasons, want to let the AI … would be happy for the AI, and the question is: outside of those very extreme cases where there’s a really tight argument that you should be happy, how happy should you be if there’s a loose analogy between the process that you ran and biological evolution?"} {"prompt": "So, what do you think are kind of the best arguments both for and against thinking that an unaligned or what seems like an unaligned AI would be morally valuable?", "completion": "So, I think it certainly depends on which kind of unaligned AI we’re talking about, and so one question is, “What are the best arguments that there exists an unaligned AI which is morally valuable?” And another question is like, “What are the best arguments that a random AI is morally valuable?” Etc. So, I guess the best argument for the existence, which I think is an important place to get started, or if you’re starting from this dismissive perspective like most people in the alignment community have intuitively, I think the existence argument is a really important first step.
I think the strongest argument on the existence perspective is: consider the hypothetical where you’re actually able to, in your computer, create a nice little simulated planet from exactly the same distribution as Earth, so you run Earth, you run evolution on it, you get something very different from human evolution, but it’s exactly drawn from the same distribution."} {"prompt": "You’d think it’s like 50/50 whether it’s likely to be better or worse than us, on average, right?", "completion": "Well, from our values, it might be … conditioning now on our values, it might well be much worse."} {"prompt": "But conditioning on being agnostic about what values are good?", "completion": "That’s right, or yeah, it’s a really complicated moral philosophy question. Then, the extreme … I think we could even make it actually tighter. So, if you were to just make such a species and then let that go in the universe, I think then you have a very hard question about whether that’s a good deal or a bad deal. I think you can do something a little bit better. You can do something a little bit more clearly optimal, which is like if you’re able to create many such simulations, run evolution not just once, but many times, look across all of the resulting civilizations and pick out a civilization which is constituted such that it’s going to do exactly the same thing you’re currently doing, such that when they have a conversation like this, they’re like, “Yeah, sure. Let’s let out that … let’s just run evolution and let that thing prosper,” then kind of like now, the civilizations who follow the strategy are just engaged in this musical chairs game where each of them started off evolving on some worlds, and then they randomly simulate a different one of them in the same distribution, and then that takes over that world.So, you have exactly the same set of values in the universe now across the people who adopt this policy, just shuffled around. So, it’s clear that it’s better for them to do that than it is for them to, say, face some substantial risk of building an unaligned AI."} {"prompt": "Okay, so I didn’t understand this in the post, but now I think I do. The idea is that: imagine that there’s a million universes, all with different versions of Earth where life has evolved.", "completion": "If you’re willing to go for a really big universe, you can imagine they’re literally just all copies of exactly the same solar system on which evolution went a little bit differently."} {"prompt": "And so they all end up with somewhat different values, and you’re saying … but if they’re all … if all of their values imply that they should just reshuffle their values and run a simulation and then be just as happy to go with whatever that spits out as what they seem to prefer, then all they do is kind of trade places on average. They all just … you all just end up with different draws from this broad distribution of possible values that people can have across this somewhat narrow but still broad set of worlds? But, you’re saying this is better because they don’t have to worry so much about alignment? Oh, you mean why are things better after having played … after having.
Yeah, why does the musical chairs thing where everyone just flips values on average with other people produce a better outcome in total?", "completion": "Yeah, so I think this is most directly relevant as an answer to the question, “Why should we believe there exists a kind of AI that we would be as happy to build as an aligned AI even though it’s unaligned?” But in terms of why it would actually be good to have done this: the natural reason is, we have some computers. The concerning feature of our current situation is that human brains are not super … we have all these humans. We’re concerned that AIs running on these computers are going to be better than humans such that we’re sort of necessarily going to have to pass control over the world off to things running on computers.So, after you’ve played this game musical chairs, now the new residents of our world are actually running on the computers, so now, as if you got your good brain emulations for free, that is now, those people who have access to simulations of their brain can do whatever it is they would … whatever you would have done with your AI, they can do with themselves. Yeah, there’s really a lot of moving parts here and a lot of ways this maybe doesn’t make any sense."} {"prompt": "Okay, let me just … so, if we handed it off, if we handed off the future to an AI that was running a simulation of these worlds and using that as its reference point for what it should value, on average, from this very abstracted point of view, this would be no worse, and, if all of the people in this broad set did this, then they would save a bunch of trouble trying to get the AI to do exactly what they want in that universe. They could all just kind of trade with one another, or they all get to save the overhead of trying to make the AI align with them specifically, and so they have to align it to some other pole that they’ve created of, yeah, some evolutionary process that listens to inside the computer?", "completion": "And the concern is presumably not the overhead, but rather the risk of failure. That is, if you think there’s a substantial risk you would build the kind of AI which is not valuable, then this would be … that’s our current state. We might build an AI that does something no one wants. We could instead build an AI that does something that we want. Maybe a second, a third alternative, which is the same as the good outcome between those two is just build an AI that reflects values that are the same, from the same distribution of values that we have."} {"prompt": "Okay, so you try to align it with your values, and if you fail, I’d think, “Well, there’s always this backup option and maybe it’ll be valuable anyway.”", "completion": "This is definitely plan B, and so it’d mostly be relevant … and again, to be clear, this weird thing with evolution is not something that’s going to get run because you can’t sample from exactly the same distribution as evolution. It would just prompt the question, “What class of AIs have this desirable future?” given that you believe at least one does, and yeah, it would be a plan B.So the reason to work on this moral question, “What class of AIs are we happy with despite not being aligned with us?” And the reason to work on that moral question would be that if you had a reasonable answer, that … it’s an alternative to doing alignment. If we had a really clear answer to that question, then we could be okay anyway even if we mess up alignment."} {"prompt": "Okay, so this would be a … yeah, I see. 
It would be an alternative approach to getting something that’s valuable even if it’s not aligned in some narrow sense with us.", "completion": "Yeah."} {"prompt": "And it might be an easier problem to solve, perhaps.", "completion": "That’s right. At least, people have not … on my list of moral philosophy problems, it’s like my top rated moral philosophy problem. I think not that many people have worked on it that long."} {"prompt": "So, if you were a moral realist, you’d just believe that there are objective moral facts.", "completion": "They should be totally fine with this kind of thing. From their perspective, why think that humans are better at discovering objective moral facts than … actually, I don’t know moral realist positions very well, but my understanding is, some moral realists would go for that."} {"prompt": "But, I guess they might look at humans and say, “Well, I do just think that we’ve done better than average, or better than you would expect at doing this.” For example, “We care about this problem to begin with, whereas many other agents just might not even have the concept of morality,” so in that sense, we’re in the top half: maybe not the very top, but I wouldn’t roll the dice completely again, but then it seems like they should also then think that there’s a decent chance. If we did okay, it suggests that there’s a decent chance that if you roll the dice again, you’d get something somewhat valuable, because it would be an extraordinary coincidence if we managed to do really quite well at morality, figuring out what these moral facts are, if it was extremely improbable for that to happen to begin with.", "completion": "Yeah, I mean, it’s definitely: if you’re a moral realist, you’re going to have different views on this question. It’s going to depend a lot on what other views you take on a bunch of related questions. I’m not super familiar with coherent moral realist perspectives, but on my kind of perspective, if you make some moral errors early in history, it’s not a big deal as long as you are on sort of the right path of deliberation, so you might think from the realist perspective, there’d be a big range of acceptable outcomes and you could in fact be quite a bit worse than humans as long as you were, again, on this right path of deliberation. I don’t quite know how moral realists feel about deliberation. Would they say there’s a broad … yeah, I think there’s probably a lot of disagreement amongst moral realists and it’s just not a-"} {"prompt": "But then if you’re a total subjectivist, do you think there’s nothing that people ought to think is right? Instead, you just kind of want what you want? Why do you care at all about what other people in different hypothetical runs of evolution would care about? Wouldn’t you just be like, completely, well, “I don’t even care what you want. All I care about is what I individually want and I just wanna maximize that.”", "completion": "Yeah, and so then you get into these decision theoretic reasons to behave kindly. The simplest pass would be: from behind the veil of ignorance, if you could have made a commitment before learning your values to act in a certain way, then that would have benefited your values in expectation. So, similarly, if there are logical correlations between a decision and the decisions of others with different values, then that might be fine.
Even on your values, it might be correct for you to make this decision because it correlates with this other decision, like a decision made by others.In the most extreme case … at some point, I should caveat this entire last however many, 10 minutes, 15 minutes of discussion as like, “This is a bunch of weird shit. It doesn’t reflect my behaviour as an employee of Open AI; I do normal stuff making AI good for humans.” Anyway, then you get into weird shit where, once you’re doing this musical chairs game, then one step of that was you ran a bunch of simulations and saw which ones were inclined to participate in the scheme you’re currently running, and so from that perspective, us as humans, we’d be like, “Well, we might as well be in such a simulation,” in which case, even on our own values, by running this scheme, we’re going to be the ones chosen to take over the outside world."} {"prompt": "Why are you more likely to be chosen to … or go into the outside world if you’re cooperative?", "completion": "So, the scheme which would run, if you wanted to do the musical chairs thing, you can’t just simulate a random species and let it take your place, because then things are just gonna move away from those species that run this procedure: they’re all gonna give up their seat, and then the random species are gonna replace them."} {"prompt": "So you end up … it’s like evolutionarily a bad strategy.", "completion": "That’s a bad strategy. The thing that might be an okay strategy is you run the scheme and then you test for each species before you let them replace you. “Did they also run the scheme, and then if so-”"} {"prompt": "Choose from the cooperative ones, yeah.", "completion": "And then that would cause the incentives to be-"} {"prompt": "Yeah, I think this does get a bit weird once we’re talking about the simulations.", "completion": "Oh, it’s super weird. It’s super weird. I think the earlier parts were more normal."} {"prompt": "Yeah, the question of just whether an AI would be morally valuable seems much more mainstream.", "completion": "I agree. I think it’s also more important. I think this weird stuff with simulations probably doesn’t matter, whereas I think the moral question, “How valuable is it to have this AI which has values that are kind of like from some similar distribution to our values?” I think that’s actually a pretty important … I think it’s relatively common for people to think that would be valuable, and it’s not something alignment people have engaged with that much. It’s not a question, to my knowledge, that moral philosophers have engaged with that much: a little bit, but like not … I guess maybe they come from a different perspective than I would want to really tackle the question from, as is often the case with moral philosophers.I guess another point is that I’m also kind of scared of this entire topic in that I think a reasonably likely way that AI being unaligned ends up looking in practice is like: people build a bunch of AI systems. They’re extremely persuasive and personable because we’ve optimized them to … they can be optimized effectively for having whatever superficial properties you want, so you’d live in a world with just a ton of AI systems that want random garbage, but they look really sympathetic and they’re making really great pleas. They’re like, “Really, this is incredibly inhumane.
They’re killing us after this or they’re selecting us to … imposing your values on us.”And then, I expect … I think the way the overall cultural consensus goes is really to be much more concerned about people being bigoted or failing to respect the rights of AI systems than to be concerned about the actual character of those systems. I think it’s a pretty likely failure mode and something I’m concerned about."} {"prompt": "Interesting. I hadn’t really thought about that scenario. So, the idea is here: we create a bunch of AIs and then we kind of have an AI justice movement that gives AIs maybe more control, like more control over their world and more moral consideration. Then it turns out that while they’re very persuasive at advocating for their moral interests, in fact, their moral interests, when they’re given moral autonomy, are nothing like ours, or much less than they seem.", "completion": "Then we’re back to this question which was unclear of how valuable … maybe that’s fine. I don’t actually have a super strong view on that question. I think in expectation, I’m not super happy about it."} {"prompt": "But by kind of arguing for the moral rights of AIs, you’re making the scenario more possible?", "completion": "I mostly think it’s gonna be … I strongly suspect there’s going to be serious discussion about this in any case, and I would prefer that there would be some actual figuring out what the correct answer is prior to it becoming an emotionally charged or politically charged issue. I’m not super confident, to be clear, about anything we’re saying here. These are not like 80% views. These are 40% views. An example would be like: often when we talk about failure scenarios, I will talk about: there are a bunch of automated autonomous corporations that control resources, and they’re amassing resources that no human gets to choose for any purpose, and people’s response is like, “Well, that’s absurd. Legally, you’re just a machine. You have no right to own things. We’re gonna take your stuff.”That’s something that I don’t think is that likely to happen. I suspect that to the extent lots of resources are controlled by AI systems, those AI systems, in the interest of preserving those resources, will make fairly compelling appeals for respecting their rights in the same way a human would if you were like … if all humans got around and said, “Yeah, we’re just gonna take.” Just such terrible optics. It seems like so much not a thing that I expect our society to do, everyone just being like, “We’re going to take all of these actors’ resources. We just don’t think they have the right to self-determination.”"} {"prompt": "Interesting. It seems like the default to me, but maybe not. I guess the issue is that the AIs would be able to advocate for themselves without human assistance potentially in a way that a corporation can’t. A corporation is still made of people.
Do corporations make an argument that, “I’m a separate entity and I deserve rights and should be able to amass resources that don’t go to shareholders?” The problem there is it’s controlled by shareholders so it ultimately bottoms out at people in some way, and AI doesn’t necessarily?", "completion": "I think it’s both the case that corporations do in fact have a level of rights that would be sufficient to run the risk argument, so that if the outcome is the same as corporations, that would be sufficient to be concerned, but I also think that corporations are both … yeah, they do bottom out with people in a way that these entities wouldn’t, and that’s one of the main problems, and also they’re just not able to make persuasive arguments. That is, one, they’re not able to represent themselves well. They don’t have a nice ability to articulate eloquent arguments that could plausibly originate with this actual moral patient making the arguments.And then two, the actual moral case is more straightforward for corporations, whereas I think for AIs there will actually be a huge amount of ambiguity. I think the sort of default, again, if you interact with people who think about these issues some right now, if you talk to random academics who think about philosophy and AI or look at Hollywood movies that are somewhat less horrifying than Terminator, I think the normal thing would be, “Yeah, by default, we expect once such agents are as sophisticated as humans, they are deserving of moral consideration for the same kinds of reasons humans are,” and it’s reasonably likely that people will deny them that moral consideration, but that would be like a terrible moral mistake.I think that’s like kind of the normal … not normal view, but that’s like if I were to try and guess where the community is heading or where it would end up, that would be my guess."} {"prompt": "Yeah, I guess I feel like AIs probably would deserve moral consideration.", "completion": "I also agree with that, yes. That’s what makes the situation so tricky."} {"prompt": "That’s true, but then there’s this question of: they deserve moral consideration as to their … I suppose ’cause I’m sympathetic to hedonism, I care about their welfare-", "completion": "As do I. To be clear, I totally care about their welfare."} {"prompt": "As do I, yeah, as we all should, but I don’t necessarily then want them to be able to do everything … do whatever they want with other resources which is an … no, but I feel that way about other people as well, necessarily, right, that I want other people on Earth to have high levels of welfare, but that doesn’t necessarily mean I wanna hand over the universe to whatever they want.I just think it makes the character of this debate a lot more contentious. If you’re like, “Yeah, everyone agrees that there’s this giant class of individuals which is potentially reasonably large, which currently does some large fraction of labor in the world, which is asking for the right to self-determination and control of property and so on, and are also way more eloquent than we are,” it’s like, geez.We’ll give you the welfare that we think you should deserve, yeah.", "completion": "It doesn’t sound good."} {"prompt": "Yeah.", "completion": "I mean the main reason I think it’s plausible is we do observe this kind of thing with nonhuman animals. People are pretty happy to be pretty terrible to nonhuman animals.
I think that-"} {"prompt": "But that’s another case where it’s like, for example, I think that we should be concerned about the welfare of pigs and make pigs’ lives good, but I wouldn’t then give pigs lots of GDP to organize in the way that pigs want, but the disanalogy there is that we think we’re more intelligent and have better values than pigs whereas it’s less clear that’d be true with AI. But, in as much as I worry that AI wouldn’t have good values, it actually is quite analogous, that.", "completion": "Yeah, I think your position is somewhat … the arguments you’re willing to make here are somewhat unusual amongst humans, probably. I think most humans have more of a tight coupling between moral concern and thinking that a thing deserves liberty and self-determination and stuff like that."} {"prompt": "And generalize, right. Do you think that they’re bad arguments? It flows more naturally from a hedonistic point of view than a preference utilitarian point of view. That seems to be maybe where we’re coming apart.", "completion": "Oh, no, I mean I also would be like, “Yep, I care about the welfare of lots of agents who I believe …” I believe it’s a terrible bad thing, but maybe not the worst thing ever if you’re mistreating a bunch of AI systems, ’cause I think they probably are at some point to be moral patients, but I would totally agree with you, though, I could have that position and simultaneously believe that it was like either a terrible moral error to bring such beings into existence or a terrible moral error to give them greater authority over what happens in the world.So, I think that’s a likely place for us to end up in, and I think the level of veer and carefulness in public discussion is not such that those kinds of things get pulled apart. I think it probably mostly gets collapsed into a general rah or boo or … I don’t know that much about how public opinion works, but I’d be happy to take simple bets on this."} {"prompt": "Well, there’s some selfish reasons why people would not necessarily give large amounts of GDP. You could imagine there’s groups that would say, “Well, we still want to own AIs, but we should treat them humanely.” I guess that doesn’t sound too good now that I say that out loud.", "completion": "I don’t think it’s gonna play well."} {"prompt": "It’s not gonna play, yeah.", "completion": "Also, I mean, there’s just such a strong concentrated interest that is like … most of the cases where this goes badly are cases where there’s a large power imbalance, but in the case we’re talking about, the most effective lobbyists will be AI systems, and it’s going to be this very concentrated powerful interest which cares a lot about this issue, has a plausible moral claim, looks really appealing. It seems kind of overdetermined, basically."} {"prompt": "Yeah, okay.", "completion": "This isn’t super important. This is mostly relevant, again, when people say things like, “No, it’s kind of crazy to imagine AI’s owning resources and doing their own thing.”"} {"prompt": "Owning resources themselves, yeah.", "completion": "And I think that is the default outcome, barring some sort of surprising developments and-"} {"prompt": "Okay, I’ve barely thought about this issue at all, to be honest, which perhaps is an oversight, but I need to think about it some more and then maybe we can talk about it again.", "completion": "I don’t think it’s that important an issue, mostly. I think, but like, details of how to make alignment work, etc., are more important. 
I just try to justify them by the additional argument that like, to the extent that you care about what these AI systems want, you really would like to create AI systems that are on the same page as humans. If you get to create a whole bunch of extra new agents, it could be great if you create a bunch of agents whose preferences are well-aligned with the existing agents, and it could be like you just create a ton of unnecessary conflict and suffering if you create a ton of agents who want very different things."} {"prompt": "Okay. So we’re almost out of time. But just a final few questions. So you’re not only working in this area, but you’re also a donor and you’re trying to support projects that you think will contribute to AI alignment. But it’s an area where there’s a lot of people trying to do that. There’s perhaps more money than people who can usefully take it. So I hear it’s somewhat challenging to find really useful things to fund that aren’t already getting funded. How do you figure out what to fund and would you mind mentioning some of the things that you donate to now?", "completion": "Yeah. So I think I would like to move towards a world where anyone who is equipped to do reasonable AI alignment work is able to do that with minimal hassle. Including if they had differences of view with other organizations currently working in the space. Or if they’re not yet trained up and want to just take some time to think about the area and see if it works out well. I think there are definitely … there are definitely people who are doing work who are interested in funding and I’ll say not doing crazy stuff. And so one could just, in order to inject more money, dip lower in that. Say look, we like previously … If we’re not really restricted by funding, then our bar ought not be like we’re really convinced this thing is great. Our bar should just be it looks like you’re sort of a sensible person who might eventually figure out what’s … You know, it’s like an important part of personal growth. Maybe this project will end up being good for reasons we don’t understand. So one can certainly dip more in that direction. I think that’s not all used up. I guess the stuff I funded in AI safety over the last year has been … The biggest thing was funding Ought. The next biggest was running this sort of open call for individuals working on alignment outside of any organization. Which has funded three or four people. Actually, I guess most recently, there’s a group working on IRL under weaker rationality assumptions in Europe. And also supporting — Zvi Mowshowitz and Vladimir Slepnev are running an AI alignment prize, which I’m funding and a little bit involved in judging."} {"prompt": "Do you think other donors that are earning to give could find similarly promising projects if they looked around actively?", "completion": "I think it’s currently pretty hard in AI alignment. I think there’s potentially room for … Right, so I think it’s conceivable existing funders including me are being too conservative in some respects. And like you could just say look, I really don’t know if x is good. But there is a plausible story where this thing is good. Or ensuring that many people in the field had enough money that they could regrant effectively. Like many people in the conventional AI safety crowd say if they had enough money, they could regrant effectively and could do whatever they wanted. Yeah, unless you’re willing to get a little bit crazy, it’s pretty hard.
I guess it also depends on what you’re … Yeah I think it depends a lot on what your bar is. I think if AI is in fact … Like if we’re on short timelines, then the AI interventions are still pretty good compared to other opportunities. And there might be some qualitative sense that this kind of feels like a longer shot or wackier thing than I would fund in most areas. So I think a donor probably has to be somewhat comfortable with that. Yeah there’s also some claims … Like I think MIRI can always use more money. I think there are some other organizations that can also use more money and it’s not something that I think about that much. In general, giving is not something I’ve been thinking about that much because I think it’s just a lot. It seems much better for me to personally be working on getting stuff done."} {"prompt": "Yeah. That sounds right. Well this has been incredibly informative and you’re so prolific that I’ve got a whole lot more questions. But we’ll have to save them for another episode in the future. But I’ll stick up links to some other things that you’ve written that I think listeners who have stuck with the conversation this far will be really interested to read. And yeah. You do write up a lot of your ideas in detail on your various blogs. So listeners who’d like to learn more, they’ll definitely have the opportunity to do so. Just one final question, speaking of the blogs that you write. About a week ago you wrote about eight unusual science fiction plots that you wish someone would turn into a proper book or a movie. And I guess they’re very hard science fiction, things that you think might actually happen and that we can learn from. So what do you think is wrong with current SciFi? And which was your favorite of the ideas that you wrote up?", "completion": "So I think a problem that I have and that maybe many similar people have is that it becomes difficult to enjoy science fiction as the world becomes less and less internally coherent and plausible. Like at the point when you’re really trying to imagine what is this world like? Like what is this character actually thinking? Like what would their background be? Often if you try and do that, I think almost all the time if you try and do that with existing science fiction, if you think too hard, eventually the entire thing falls apart and it becomes very difficult … you kind of have to do a weird exercise in order to not think too hard if you really want to sympathize with any of the characters. Or really even understand what’s … like think about what’s going on in the plot. I think it’s extremely common. It’s very, very rare to have any science fiction that doesn’t have that problem. I think that’s kind of a shame, because it feels to me like the actual world we live in is super weird. And there’s lots of super crazy stuff that I don’t know if it will happen, but certainly is internally consistent that it could happen. And I would really enjoy science fiction that just fleshed out all the crazy shit that could happen. I think it’s a little bit more work. And the basic problem is that most readers just don’t care at all. Or it’s incredibly rare for people to care much about the internal coherence of the world. So people aren’t willing to spend extra time or slightly compromise on how convenient things are narratively. I would guess that the most amusing story from the ones I listed – the one that would actually make the best fiction – would be … So I described one plot that was in Robin’s Age of Em scenario.
Which I think is … If one, doesn’t fill in all the details, is a pretty coherent scenario. This is where you have a bunch of simulated humans who have mostly replaced normal humans in work who are alive during this brief period of maybe a few calendar years as we transition from simulated human brains to much more sophisticated AI.And in that world, the experience of an em is very very weird in a number of ways. One of which is it’s very easy to … Like you can put an em in a simulation of an arbitrary situation. You can copy ems. You can reset them. You can run an em a thousand times through a situation. Which I think is a really interesting situation to end up in. So I described a plot that sort of … yeah.I think if you consider the genre of con movies, I quite enjoy that genre. And I think it would be a really really interesting genre in the setting where it’s possible to take a person, to copy a person’s brain, to put them in simulations. Where people actually have a legitimate interest for wondering not only what is this person going to do in a simulation, but like what is this person going to do in a simulation when they’re simulating someone else. It’s like incredibly complicated, the dynamics of that situation. And also very conducive, yeah very conducive I think to amusing plots. So I’d be pretty excited to read that fiction.I think it’d be most amusing as a film. I don’t think it’s ever going to happen. I think none of them will happen. It’s very depressing."} {"prompt": "Maybe after the singularity, we’ll be so rich. We’ll be able to make all kinds of science fiction that appeals just to a handful of people.", "completion": "It will be super awesome. Yeah once we have really powerful AI, the AI can write for us."} {"prompt": "We can each have a single AI just producing films for one individual.", "completion": "Oh thousands of AIs, thousands of AIs just producing your one. It’s going to be so good."} {"prompt": "That’s the dream. Thanks so much for taking the time to come on the podcast, Paul. And also just in general, thanks so much for all the work that you’re putting into trying to make the world a better place. Well, at least the future in a better place.", "completion": "Thanks again for having me and thanks for running the podcast."} {"prompt": "My guest today is Dr. Paul Christiano. Back by popular demand, making his second appearance on The 80,000 Hours Podcast. Paul completed a PhD in Theoretical Computer Science at UC Berkeley and is now a technical researcher at OpenAI, working on aligning artificial intelligence with human values. He blogs about that work at ai-alignment.com and about a wide range of other interesting topics at sideways-view.com. On top of that, Paul is not only a scholar, but also always and everywhere a gentleman. Thanks for coming on the podcast, Paul.", "completion": "Thanks for having me back."} {"prompt": "I hope to talk about some of the interesting things you’ve been blogging about lately, as well as what’s new in AI reliability and robustness research. First, what are you doing at the moment and why do you think it’s important work?", "completion": "I guess I’m spending most of my time working on technical AI safety at OpenAI. I think the basic story is similar to a year ago, that is, building AI systems that don’t do what we want them to do, that push the long-term future in a direction that we don’t like, seems like one of the main ways that we can mess up our long-term future. That still seems basically right. 
I maybe moved a little bit more towards that being a smaller fraction of the total problem, but it’s still a big chunk. It seems like this is a really natural way for me to work on it directly, so I think I’m just going to keep hacking away at that. That’s the high level. I think we’re going to get into a lot of the details, probably in some questions."} {"prompt": "We started recording the first episode last year, almost exactly a year ago, actually. When it comes to AI safety research, and I guess your general predictions about how advances in AI are going to play out, have your opinions shifted at all? If so, how?", "completion": "I think the last year has felt a lot like, there’s no big surprises and things are settling down. Maybe this has been part of a broader trend where my view is– five years ago, my view was bouncing around a ton every year, three years ago, it was bouncing around a little bit, over the last year, it has bounced around even less, so I think my views haven’t shifted a huge amount. I think we haven’t had either big downward or upward surprises in terms of overall AI progress. That is, I think we’ve seen things that are probably consistent with both concerns about AI being developed very quickly, but also like the possibility of it taking a very, very long time. In terms of our approach to AI alignment, again, I think my understanding of what there is to be done has solidified a little bit. It’s moved more– continues to move from some broad ideas of what should be done, to here are the particular groups implementing things. That’s continuing to happen but there haven’t been big surprises."} {"prompt": "Yes, last time we spoke about a bunch of different methods, including AI safety via debate. I mean different AIs debate one another and then we’re in a position to– Well, hopefully, we’re in a position to adjudicate which one is right. Is there any progress on that approach or any of the other ones that we spoke about?", "completion": "Yes. I work on a sub-team at OpenAI that probably works on that idea, the safety via debate, as well as amplification. I would say that over the last year, a lot of the work has been on some combination of building up capacity and infrastructure to make those things happen, such as scaling up language models and integrating with good large language models, so it’s things that understand some of the reasoning humans do when they talk or when they answer questions. Trying to get that to the point where we can actually start to see the phenomena that we’re interested in. I think there’s probably, generally, been some convergence in terms of how different– at least different parts within OpenAI, but I think also across organizations have been thinking about possible approaches. E.g. I guess within OpenAI, for people thinking about this really long-term problem, we mostly think about amplification and debates. There’s this on-paper argument that those two techniques ought to be very similar. I think they maybe suggest different emphases on which experiments you run in the short term. I think as we’ve been trying things, both the people who started more on the amplification side are running experiments that look more similar to what you might suspect from the debate perspective, and also vice versa, so I think there are fewer and fewer big disagreements about that. I think similarly, with the people thinking independently about long-term safety at DeepMind, I guess I feel like there’s less of a gap between us now.
Maybe that’s good because it’s easier to communicate and be on the same page and have more shared understanding of what we’re doing. I think that compared to a year ago, things feel like– this is just related to things settling down and maturing. It’s still a long way from being like almost any normal field of academic inquiry, it’s nowhere close to that."} {"prompt": "Don’t people disagree more or just have very different perspectives and–?", "completion": "Yes, they disagree more, they have less of a common sense of the field. They have less of a mature method of inquiry, which everyone expects to make progress. It’s still a long way away from more mature areas, but it is moving in that direction."} {"prompt": "This is maybe a bit random, but do you feel like academic fields are often held back by the fact that they codify particular methods, and particular kinds of evidence, and particular worldviews that blinker them to other options? Maybe it’s an advantage for this sort of research to be a bit more freewheeling and diverse.", "completion": "I think that’s an interesting question. I guess I would normally think of it as an academic field that’s characterized by this set of tools and its understanding of what constitutes progress. If you think of the field as characterized by problems, then it makes sense to talk about the field being blinkered in this way or having value left on the table. If you think about the field as characterized by this set of tools, then that’s the thing they’re bringing to the table. I would say from that perspective, it’s both bad that you can’t use some existing set of tools. That’s a bummer and it’s not clear. I think there’s a lot of debate with people about how much we should ultimately expect the solutions to look like using existing sets of tools. That’s bad. It’s also sort of a little bit bad to not yet have, like, mature tools that are specific to this kind of inquiry. I think that’s more how I think of it. I think many academic fields are not good at– If you think of them as answering this set of questions, and they’re the only people answering that set of questions, maybe they’re not really set up that optimally to do that. I think I’ve shifted to not mostly thinking of academic fields that way."} {"prompt": "I guess economics has problems with its method, but then those are covered by other fields that use different methods. That would be the hope.", "completion": "That’s the hope."} {"prompt": "[laughs]", "completion": "I think economics is an interesting case since there are a bunch of problems. I think that most fields do have this– like, there’s a bunch of problems that sort of fit in economics, and there’s the set of tools economists use. If there’s a problem that nominally fits under their purview, but which is not a good fit for their tools, then you’re in sort of a weird place. I think economics also, maybe because of there being this broad set of problems that fit in their domain, like– I think it’s not- this distinction is not obvious. There’s some–"} {"prompt": "It’s kind of an imperial field, notoriously. It goes and colonizes, like, every other field or any questions that it can touch on.
Then sometimes, I guess, yes, the method might be well suited to those questions that it wants to tackle.", "completion": "Yes, although in some sense, if you view it as like a field that has a set of tools that it’s using, it’s very reasonable to be going out and finding other problems that are actually– If you’re actually correct about them being amenable to those tools. I think there’s also a thing on the reverse, where like, you don’t want to be really that staking claim on these questions. You should be willing to say, “Look, these are questions that sort of we’ve traditionally answered but there are other people.” Sometimes, those can be answered in other ways."} {"prompt": "Yes. It’s an interesting framing of problems with academic fields that its kind of not so much that the field is bad, but maybe that it’s tackling the wrong problem or, it’s tackling problems that are mismatched to the methods.", "completion": "I think about this a lot maybe because in computer science, you more clearly have problems which like, it’s not so much staked out. It’s not like, “Here’s a problem and this problem fits in a domain.” It’s more like, there are several different approaches, like there are people who come in with a statistics training, and there are people who will come in as theorists, and there are people who come in as like various flavors of practitioners or experimentalists, and they sort of– You can see the sub-fields have different ways they would attack these problems. It’s more like you understand, like, this sub-field’s going to attack this problem in this way, and it’s a reasonable division of labor."} {"prompt": "Let’s back up. You talked about running experiments, what kind of experiments are they concretely?", "completion": "Yes. I think last time we talked, we discussed three kinds of big uncertainties or room for making progress. One of them, which isn’t super relevant to experiments is figuring out conceptual questions about how are we going to approach, like find some scalable approach to alignment? The other two difficulties, both were very amenable to different kinds of experiments.One of our experiments involving humans, where you start to understand something about the character of human reasoning, you understand sort of– We have some hopes about human reasoning, we hope that, in some sense, given enough time or given enough resources, humans are universal and could answer some very broad set of questions if they just had enough time, enough room to reflect. It’s like one class of experiments that’s sort of getting at that understanding. In what sense is that true? In what sense is that false? That’s a family of experiments I’m very excited about.OpenAI has recently started hiring people. We just hired two people who will be scaling up those experiments here. Ought has been focused on those experiments and it’s starting to really scale up their work. That’s one family of experiments. There’s a second difficulty or third difficulty, which is, understanding how both theoretical ideas about alignment and also, these facts about how human reasoning work. How do those all tie together with machine learning?Ultimately, at the end of the day, we want to use these ideas to produce objectives that can be used to train ML systems. That involves actually engaging with a bunch of detail about how ML systems work. Some of the experiments are directly testing those details, so saying, “Can you use this kind of objective? 
Can machine learning systems learn this kind of pattern or this kind of behavior?” Some of them are just experiments that are like– maybe more in the family of, “We expect them to work if you just iterate a little bit”. So you sort of expect there is going to be some way that we can apply language models to this kind of task, but we need to think a little bit about how to do that and take a few swings at it."} {"prompt": "I saw that OpenAI was trying to hire social scientists and kind of making the case that social scientists should get more interested in AI alignment research. Is this the kind of work that they’re doing, running these experiments or designing them?", "completion": "Yes, that’s right. I think we hired– we’re aiming initially to hire one person in that role. I think we’ve now made that hire and they’re starting on Monday. They’ll be doing experiments, trying to understand like, if we want to train to use human reasoning in some sense as a ground truth or gold standard, like how do we think about that? How do we think about– In what sense could you scale up human reasoning to answer hard questions? In what sense are humans a good judge of correctness or incentivize honest behavior between two debaters?Some of that is like, what are empirically the conditions under which humans are able to do certain kinds of tasks? Some of them are more conceptual issues, where like humans are just the way you get traction on that because humans are the only systems we have access to that are very good at this kind of flexible, rich, broad reasoning."} {"prompt": "I mentioned on Twitter and Facebook that I was going to be interviewing you again and a listener wrote in with a question. They had heard, I think, that you thought there’s a decent probability that things would work out okay or that the universe would still have quite a lot of value, even if we didn’t have a solid technical solution to AI alignment and AI took over and was very influential. What’s the reasoning there, if that’s a correct understanding?", "completion": "I think there’s a bunch of ways you could imagine ending up with AI systems that do what we want them to do. One approach which is, as a theorist, the one that’s most appealing to me, is to have some really good understanding on paper. Like, “Here’s how you train an AI to do what you want,” and we just nail the problem in the abstract before we’ve even necessarily built a really powerful AI system.This is the optimistic case where we’ve really solved alignment, it’s really nailed. There’s maybe a second category where you’re like, or this broad spectrum where you’re like, “We don’t really have a great understanding on paper of a fully general way to do this, but as we actually get experienced with these systems, we get to try a bunch of stuff.” We get to see what works, we got to– If we’re concerned about a system failing, we can try and run it in a bunch of exotic cases and just try and throw stuff at it and see. Maybe if its like- we stress-tested enough on something that actually works.Maybe we can’t really understand a principled way to extract exactly what we value, but we can do well enough at constructing proxies. There’s this giant class of cases where you like, don’t really have an on-paper understanding but you can still wing it. I think that’s probably not what the asker was asking about. 
There’s a further case where you try and do that and you do really poorly, and as you’re doing it, you’re like, “Man, it turns out these systems do just fail in increasingly catastrophic ways. Drawing the line out, we think that could be really bad.”I think for now, even in that worst case that you don’t have an on-paper understanding, you can’t really wing it very well. I still think there’s certainly more than a third of a chance that everything is just good, and that would have to come through, like people probably understanding that there’s a problem, having a reasonable consensus about, it’s a serious problem, being willing to make some sacrifices in terms of how they deploy AI. I think that at least on paper, many people would be willing to say, like, “If really rolling AI out everywhere would destroy everything we value, then we are happy to be more cautious about how we do that, or roll it out in a more narrow range of cases, or take development more slowly.”"} {"prompt": "People showing restraint for long enough to kind of patch over the problems well enough to make things okay.", "completion": "Yes. Somehow, there’s a spectrum of how well– some substitution between how much restraint you show and how much you are able to either ultimately end up with a clean understanding or wing it. One-third is my number if it turns out that winging it doesn’t work at all, like we’re totally sunk, such that you have to show very large amounts of restraint. People have to actually just be like, “We’re going to wait until they are so much smarter.” We’ve either used AI to become much smarter, or better able to coordinate, better able to resolve these problems, or something like that. You have to wait until that’s happened before you’re actually able to deploy AI in general.I think that’s still reasonably likely. I think that’s a point where lots of people disagree, I think, on both ends. A lot of people are much more optimistic, a lot of people have the perspective that’s like, “Look, people aren’t going to walk into razorblades and have all the resources in the world get siphoned away or like deploy AI in a case where catastrophic failure would cause everyone to die.” Some people have the intuition like, “That’s just not going to happen and we’re sufficiently well coordinated to avoid that.”I’m not really super on the same page there. I think if it was a really hard coordination problem, I don’t know, it looks like we could certainly fail. On the other hand, some people are like, “Man, we can’t coordinate on anything.” Like if there was a button you could just push to destroy things or someone with $1 billion could push to really mess things up, things would definitely get really messed up. I just don’t really know.In part, this is just me being ignorant and in part, it’s me being skeptical of both of the extreme perspectives, like when people advocating them are also about as ignorant as I am of the facts on the ground. I certainly think there are people who have more relevant knowledge and who could have much better calibrate estimates if they understood the technical issues better than I do. I’m kind of at like some form of pessimism. If things were really, really bad, if we really, really don’t have an understanding of alignment, then I feel pessimistic, but not radically pessimistic."} {"prompt": "Yes. It seems like a challenge there is that you’re going to have a range of people who can have a range of confidence about how safe the technology is. 
Then you have this problem that whoever thinks it’s the safest, is probably wrong about that because most people disagree and they’re the most likely to deploy it prematurely.", "completion": "Yes. I think it depends a lot on what kind of signals you get about the failures you’re going to have, so like how much you have a– Yes, we can talk about various kinds of near misses that you could have. I think the more clear of those are, the easier it is for there to be enough agreement. That’s one thing.A second thing is we’re concerned or I’m concerned about a particular kind of failure that really disrupts the long-term trajectory of civilization. You can be in a world where that’s the easiest kind of failure. That’s sort of getting things to work in practice, is much easier than getting them to work in a way that preserves our intention over the very long term.You could also imagine worlds though where a system which is going to fail over the very long term, is also reasonably likely to be a real pain in the ass to deal with in the short term. In which case, again, it will be more obvious to people. Then, I think a big thing is just– we do have techniques, especially if we’re in a world where AI progress is very much driven by large amounts of giant competing clusters.In those worlds, it’s not really like any person can press this button. It’s one, there’s a small number of actors. The people who are willing to spend, say, tens of billions of dollars and two, they are those actors who have some room to sit down and reach agreements or like- which could be formalized to varying degrees, but it won’t be like people sitting separately in boxes making these calls.At worst, it’ll be like in that world. At worst, still be like a small number of actors who can talk amongst themselves. At best, it’ll be like a small number of actors who agree like, here are norms. We’re going to actually have some kind of monitoring and enforcement to ensure that even if someone disagreed with the consensus, they wouldn’t be able to mess things up."} {"prompt": "Do you think you or OpenAI have made any interesting mistakes in your work on AI alignment over the years?", "completion": "I definitely think I have made a lot of mistakes, which I’m more in a position to talk about."} {"prompt": "[laughs] Go for it. [laughs]", "completion": "I guess there’s, yes, one category or there’s been a lot of years I’ve been thinking about alignment, so that’s a lot of time to rack up mistakes made. Many of which aren’t as topical though. It was a class of intellectual mistakes, I feel like, I made like four years ago or say, five years ago, when I was much earlier in thinking about alignment, which we could try and get into.I guess my overall picture of alignment has changed a ton since six years ago. I would say that’s basically because six years ago, I reasoned incorrectly about lots of things. It’s a complicated area. I had a bunch of conclusions I reached. Lots of the conclusions were wrong. That was a mistake. Maybe an example of a salient update is I used to think of needing to hit this, like you really need to have an AI system that understands exactly what humans want over the very long term.I think my perspective shifted more to something maybe more like a commonsensical perspective of, if you have a system which sort of respects short-term human preferences well enough, then you can retain this human ability to course correct down the line. 
You don’t need to appreciate the full complexity of what humans want, you mostly just need to have a sufficiently good understanding of what we mean by this course correction, or remaining in control, or remaining informed about the situation. I think it’s a little bit hard to describe that update concisely, but it does really change how you conceptualize the problem or what kinds of solutions are possible. That’s an example of a long-ago one– there’s a whole bunch of those that have been racked up over many years. Certainly I also made a ton of low-level tactical mistakes about what to work on. Maybe a more recent mistake that is salient is like, I don’t feel like I’ve done very well in communication about my overall perception of the problem. That’s not just expressing that view to others but also really engaging with reasons that maybe more normal perspectives are skeptical of it. I’ve been trying to work a little bit more and I’m currently trying to better pin down, here is a reasonably complete, reasonably up-to-date statement of my understanding of the problem, and how I think we should attack the problem. Really iterating on that to get to the point where it makes sense to people who haven’t spent years thinking in this very weird style that’s not well-vetted. I’m pretty excited about that. I think that’s probably something I should’ve been doing much more over the last two years."} {"prompt": "Have you seen Ben Garfinkel’s recent talk and blog post about how confident should we be about all of this AI stuff?", "completion": "I think I probably have seen a Google Doc or something. Yeah."} {"prompt": "Do you have any views on it, if you can remember it? [laughs]", "completion": "I think there are lots of particular claims about AI that I think were never that well-grounded but people were kind of confident in, which I remain pretty skeptical about. I don’t remember exactly what he touches on in that post, but claims about takeoff– I think people have had claims of really very, very rapid AI progress, and particularly claims about the structure of that transition. I think people would have pretty strong, pretty unconventional views. I guess to me it feels like I’m just taking more of an agnostic opinion, but I think to people in the safety community, it feels more like I’m taking this outlier position. That’s definitely a place where I agree with Ben’s skepticism. I think in terms of the overall question, how much is there an alignment problem? I think it’s right to have a lot of uncertainty thinking about it and to understand that the kind of reasoning we’re doing is pretty likely to go wrong. I think you have to have that in mind. That said, I think it is clear, there’s something there. I don’t know if he’s really disagreeing with that."} {"prompt": "I think his conclusion is that he– it’s well worth quite a lot of people working on this stuff, but a lot of the arguments that people have made for that are not as solid as maybe we thought when you really inspect all the premises and think, yes.", "completion": "Yes. I definitely think it’s the case that people have made a lot of kind of crude arguments and they put too much stock in those arguments."} {"prompt": "One point that he made which stood out to me was, there have been technologies that have dramatically changed the world in the past, electricity, for example, but it’s not clear that working on electricity in the 19th century would have given you a lot of leverage to change how the future went.
It seems like even though it was very important, it was just on a particular track and there was only so much that even a group of people could have done to steer how electricity was used in the future. It’s possible that AI will be similar. It’d be very important, but also that you don’t get a ton of leverage by working on it.", "completion": "Yes. I think it’s an interesting question. Maybe a few random comments. One, it does seem like you can accelerate the adoption. If you had an understanding early enough– and I’m not exactly sure how early you would have acted to get much leverage. If you understand the problem early enough, you could really change the timeline for adoption. You can really imagine the small groups having pushed adoption forward by six months or something to the extent of like– There are a lot of engineering problems and conceptual difficulties that were distinctive to this weird small thing, which in fact did play a big role– it’s one part of the overall machine and trajectory of civilization, but it really was well-leveraged, and faster progress in that area seems like it would have had unusually high dividends for faster overall technological progress. Maybe going along with that, I think it is also reasonable to think that if a small group had positioned themselves to understand that technology well and be pushing it and making investments in it, they probably could have had– They couldn’t, like, have easily directly steered from a great distance, but they could’ve ended up in a future situation where they’ve made a bunch of money or in a position to understand well an important technology, which not that many people understand well as it gets rolled out. I think that’s again, a little bit different from the kind of thing he’s expressing skepticism about. It seems like an important part of the calculus if one is thinking about trying to have leverage by working on AI, or thinking about AI. I do think the alignment problem is distinctive from anything you could have said in the context of electricity. I’m not mostly trying to do the like, “Make investments in AI sets you in a better position to have influence later, or make a bunch of money”. I’m mostly in the like, “I think we can identify an unusually crisp issue,” which seems unusually important and can just hack away at that. It seems like it should have a lot of question marks around it, but I don’t really know of historical cases which used a similar heuristic. Sometimes people cite them and I’ve tried to look into a few of them, but I don’t know historical cases where you would have made a similarly reasonable argument, and then ended up feeling really disappointed."} {"prompt": "Do you have any thoughts on what possible or existing AI alignment work might yield the most value for an additional person or million dollars that it receives at the moment?", "completion": "Yes. I mentioned earlier these three categories of difficulties. I think different resources will be useful in different categories, and each of them is going to be best for some resources, like some people, or some kinds of institutional will. Briefly going over those again, one was conceptual work on how is this all going to fit together if we imagine what kinds of approaches potentially scale to very, very powerful AI systems, and what are the difficulties in that limit as systems become very powerful?
I’m pretty excited for anyone who has reasonable aptitude in that area to try working on it.It’s been a reasonable fraction of my time over the last year. Over my entire career, it’s been a larger fraction of my attention, and it’s something that I’m starting to think about scaling up again. This is thinking, doing theoretical work directly at the alignment problem. Asking on paper, “What are the possible approaches to this problem? How do we think that this will play out moving towards having a really nailed down solution that we’ll feel super great about?” That’s one category. It’s a little bit hard to down money on that, but I think for people who like doing the theoretical or conceptual work, that’s a really good place to add such people.There’s a second category that’s like this sort of understanding facts about human reasoning. Understanding in the context of debate, can humans be good judges between arbitrating different perspectives, competing perspectives? How would you set up a debate such that in fact, the honest strategy wins an equilibrium? Or on the amplification side, asking about this universality question, is it the case you can decompose questions into at least slightly easier questions.I’m also pretty excited about throwing people at that. Just running more experiments, trying to actually get practice, engaging in this weird reasoning and really seeing, can people do this? Can we iterate and try and identify the hard cases? I’m pretty excited about that. I think it involves some overlap in the people who do those kinds of work, but it maybe involves different people.There’s this third category of engaging with ML and actually moving from the theory to implementation. Also getting in a place with infrastructure, and expertise, and so on to implement whatever we think is the most promising approach. I think that again, requires a different kind of person still and maybe also requires a different kind of institutional will and money. It’s also a pretty exciting thing to me. That maybe it can both help provide a sanity check for various ideas coming out of the other kinds of experiments, and it can also be a little bit more of this being in a position to do stuff in the future.We don’t necessarily know exactly what kind of alignment work will be needed, but just having institutions, and infrastructure, and expertise, and teams that have experience thinking hard about that question, actually building ML systems, and trying to implement say, “Here’s our current best guess. Let’s try and use this alignment or integrate these kind of ideas about alignment into state-of-the-art systems.” Just having a bunch of infrastructures able to do that seems really valuable.Anyway, those are the three categories where I’m most excited about throwing resources on alignment work. I mostly don’t think– It’s very hard to talk in the abstract about which one’s more promising, just because there’s going to be lots of comparative advantage considerations, but I think there’s definitely a reasonable chunk of people that push, “I think it’s best to go into any of those three directions.”"} {"prompt": "Changing gears into something a bit more whimsical about a blog post that I found really charming. 
You’ve argued recently that a potentially effective way to reduce existential risk would be to leave messages somewhere on earth for our descendants to find in case civilization goes under or humans go extinct and then life reappears, intelligent life reappears on the earth, and we maybe want to tell them something to help them be more successful where we failed. Do you want to outline the argument?", "completion": "Yes. The idea is, say, humanity– if every animal larger than a lizard was killed and you still have the lizards, lizards have a long time left before the lizards would all die, before photosynthesis began breaking down. I think based on our understanding of evolution, it seems reasonably likely that in the available time, lizards would again be able to build up to a spacefaring civilization. It’s definitely not a sure thing and it’s a very hard question to answer, but my guess would be more likely than not, lizards will eventually be in a position to also go and travel to space."} {"prompt": "It’s a beautiful image.[laughter]", "completion": "That’s one place where we’re like, “That’s a weird thing.” There’s another question, “How much do you care about that lizard civilization?” Maybe related to these other arguments, related to weird decision theory arguments about how nice should you be to other value systems? I’m inclined to be pretty happy if the lizards take our place. I prefer we do it, but if it’s going to be the lizards or nothing, I consider a real– I would really be inclined to help the lizards out."} {"prompt": "Maybe this is too much of an aside here, but I kind of– In that case, I had the intuition that, “Yes, future lizard people or humans now…” It’s like I’m not sure which is better. It’s like humans were drawn out of the pool of potential civilizations. It’s not obvious whether we’re better or worse than like– If you reran history with lizards rather than people. I just wanted to jump in because some of my colleagues pointed out that apparently there’s some insane conspiracy theory out there about so-called ‘lizard people’ secretly running the world, which I hadn’t heard of. To avoid any conceivable possible confusion, what we’re talking about has nothing to do with any such ‘lizard people’. [laughter]‘Lizard people’ is just our jokey term for whatever intelligent life might one day re-evolve on Earth, many millions or hundreds of millions of years into the future, should humans at some point die out. Perhaps ‘lizard people’ was a slightly unfortunate turn of phrase in retrospect! OK, on with the show.", "completion": "Yes. I think it’s an interesting question. I think this is, to me, one of the most– it’s related to one of the most important open philosophical questions, which is, just in general, what kinds of other value systems should you be happy with replacing you? I think the lizards would want very different things from us and on the object level, the world they created might be quite different from the world we would’ve created.I share this intuition of like, I’m pretty happy for the lizards. It’s like I’d feel pretty great. If I’m considering, should we run a risk of extinction to let the lizards take over? I’m more inclined to let the lizards take over than run a significant risk of extinction. Yes, it’s like I would be happy. 
If there’s anything we could do to make life easier for the lizards, I’m pretty excited about doing it."} {"prompt": "I’m glad we’ve made this concrete with the lizard people.[laughter]Okay, carry on.", "completion": "I’d say lizards also in part because if you go too much smaller than lizards, at some point, it becomes more dicey. If you only had like plants, it’s a little bit more dicey whether if they have enough time left. Lizards, I think, are kind of safe-ish. Lizards are pretty big and pretty smart. Most of the way to spacefaring.Then there’s this question of, what could we actually do? Why is this relevant? The next question is, is there a realistic way that we could kill ourselves and all the big animals without just totally wiping out life on earth or without replacing ourselves with say AIs pursuing very different values? I think by far the most likely way that we’re going to fail to realize our values is like we don’t go extinct, but we just sort of are doing the wrong thing and pointing in the wrong direction. I think that’s much, much more likely than going extinct.My rough understanding is that if we go extinct at this point, we will probably take really most of the earth’s ecosystem with us. I think if you thought that climate change could literally kill all humans, then you’d be more excited. It’s like there’s some plausible ways that you could kill humans but not literally kill everything. It’s a total really brutal collapse of civilization. Maybe there’s like some kinds of bioterrorism that kill all large animals but don’t kill- or kill all humans, but don’t necessarily kill everything.If those are plausible, then there’s some chance that you end up in this situation where we’ve got the lizards and now it’s up to the lizards to colonize space. In that case, it does seem like we have this really interesting lever, where lizards will be evolving over some hundreds of millions of years. They’ll be like in our position some hundreds of millions of years from now. It does seem probably realistic to leave messages that is to like, somehow change earth such that a civilization that appeared several hundred million years later could actually notice the changes we’ve made and could start investigating them.At that point, we would probably have like, if we’re able to call the attention of some future civilization to a particular thing, I think then we can encode lots of information for them and we could decide how we want to use that communication channel. Sometimes people talk about this, they normally are imagining radically shorter time periods than hundreds of millions of years, and they’re normally not being super thoughtful about what they’d want to say. I think my guess would be that like, if there are ways– You could really substantially change the trajectory of civilization by being able to send a message from a much, much more–If you imagine like the first time that humans could have discovered a message sent by a previous civilization, it would have been– I mean it depends a little bit on how you’re able to work this out, but probably at least like a hundred years ago. At that point, the message might’ve been sent from a civilization which was much more technologically sophisticated than they are. 
Also, which has like experienced an entire civil– the entire arc of civilization followed by extinction.At a minimum, it seems like you could really change the path of their technological development by like selectively trying to spell out for them or show them how to develop- how to achieve certain goals. You could also attempt, although it seems like a little bit more speculative to help set them on a better course and be like, “Really, you should be concerned about killing everyone.” It’s like, “Here’s some guidance on how to set up institutions so they don’t kill every new one.”I’m very concerned about AI alignment, so I’d be very interested as much as possible being like, “Here’s the thing, which upon deliberation we thought was a problem. You probably aren’t thinking about it now, but FYI, be aware.” I do think that would put a community of people working on that problem and that future civilization into a qualitatively different place than if like– It’s just sort of– I don’t know.It’s very hard to figure out what the impact would be had we have stumbled across these very detailed messages from the past civilization. I do think it could have a huge technological effect on the trajectory of development, and also reasonably likely have a reasonable effect either on deliberation and decisions about how to organize ourselves or on other intellectual projects."} {"prompt": "Yes. Give this hypothetical again, could we have made history go better if we could just send as much text as we wanted back to people in 1600 or 1700? Then it kind of on reflection does seem like, “Well yes, we could just send them lots of really important philosophy and lots of important discoveries in social science, and tell them also the things that we value that maybe they don’t value. Like speed up kind of the strains of philosophical thought that we think are particularly important.”", "completion": "You also just choose what technology– [chuckles] like pick and choose from all the technologies that exist in our world and be like, “Here’s the ones we think are good on balance.”"} {"prompt": "Right, yes. You just like, you don’t give them the recipe for nuclear weapons. Instead, you give them the game theory for mutually assured destruction so they can– or you like tell them everything we do about how to sustain international cooperation, so whenever they do develop nuclear weapons, they’re in a better position to not destroy themselves.", "completion": "Yes, and “Here’s a way to build a really great windmill.”[laughter]"} {"prompt": "[laughs] Yes, “Here’s solar panels. Why not? Yes, get some solar panels stuff.”", "completion": "I don’t know how much good you could do with that kind of intervention and it’s a thing that would be interesting to think about a lot more. My guess would be that there’s some stuff which in expectation is reasonably good, but it’s hard to know."} {"prompt": "Yes. There’s a pretty plausible case that if humans went extinct, intelligent life might reemerge. Probably, if we thought about it long enough, we could figure out some useful things that we could tell them that would probably help them and give them a better shot at surviving, and thriving, and doing things that we value. How on earth would you leave a message that could last hundreds of millions of years? It seems like it could be pretty challenging.", "completion": "Yes, I think there’s two parts of the problem. One part is calling someone’s attention to a place. I think that’s the harder part by far. 
For example, if you were to like– you can’t just bury a thing in most places on earth, because hundreds of millions of years is long enough that the surface of the earth is no longer the surface of the earth. I think the first and more important problem is calling someone’s attention to a spot or to one of a million spots or whatever. Then the second part of the problem is, after having called someone’s attention to a spot, how do you actually encode information? How do you actually communicate it to them? It’s also probably worth saying, this comes from a blog post that I wrote. I expect, I think that there are people who have a much deeper understanding of these problems, that have probably thought about many of these exact problems in more depth than I have. I don’t want to speak as if I’m like a–"} {"prompt": "An authority on leaving messages for future civilizations. [laughs]", "completion": "That’s right. I thought about it for some hours. [laughter] In terms of calling attention, I thought of a bunch of possibilities in the blog post that I was interested in– started some discussions online with people brainstorming possibilities. I think if we thought about it a little bit, we could probably end up with a clearer sense. Probably the leading proposal so far is, I think Jan Kulveit had this proposal of– There’s this particularly large magnetic anomaly in Russia, which is very easy for civilization to discover quite early, and which is located such that it’s unlikely to move as tectonic plates move. It seems pretty plausible, it’s a little bit difficult to do but it’s pretty plausible that you could use modifications to that structure, or locating things at Schelling points in the structure, in a way that at least our civilization would very robustly have found. It’s hard to know how much a civilization quite different from ours would have…"} {"prompt": "You said, just the straightforward idea of a really big and hard rock that juts out of the earth. Hopefully, it’ll survive long enough to be– [crosstalk]", "completion": "Yes, it’s really surprisingly hard to make things like that work. [chuckles]"} {"prompt": "Yes, I guess over that period of time, even a very durable rock is going to be broken down by erosion.", "completion": "Yes. Also stuff moves so much. Like you put the rock on the surface of the earth, it’s not going to be on the surface of the earth in hundreds of millions of years anymore."} {"prompt": "It just gets buried somehow. Yes, interesting. [crosstalk]", "completion": "Surprisingly, I really updated a lot towards it being rough. When I started writing this post, I was like, “I’m sure this is easy,” and I was like, “Aw jeez, really, basically everything doesn’t work.”"} {"prompt": "What about a bunch of radioactive waste that would be detectable by Geiger counters?", "completion": "Yes, so you can try and do things– You have to care about how long these things can last, and how easy they are to detect, and how far from the surface they remain detectable, but I think there are options like that, that work. [chuckles] I think also magnets– it seems like magnets are longer-lasting than we might have guessed and are a reasonable bet. I think they can easily be as effective."} {"prompt": "You made this point that you can literally have thousands of these sites and you can make sure that in every one, there’s a map of where all the others are, so they only have to find one.
Then they can just go out and dig up every single one of them, which definitely improves the odds.", "completion": "Yes. Also, there are some fossils around, so if you think you’ve got a million very prone-to-be-fossilized things, then it’s probably not going to work. Yes, I haven’t thought about that in a while. I think probably if you sat down though, if you just took a person, and that person spent some time really fleshing out these proposals, and digging into them, and consulting with experts, they’d probably find something that would work. Similarly, on the social side, if you thought about it a really long time, I expect you could find– you sort of could have a more conservative view about whether there’s something to say that would be valuable. The first step would be, do you want to pay someone to spend a bunch of time thinking about those things? Is there someone who’s really excited to spend a bunch of time thinking about those things, nailing down the proposals? Then seeing whether it was a good idea and then if it was a good idea, spending the millions or tens of millions of dollars you need to actually make it happen."} {"prompt": "In terms of how you would encode this information, it seemed like you thought probably just etching it in rock would be a plausible first pass. That would probably be good enough for most of the time. You could probably come up with some better material on which you could etch things that is very likely to last a very long time. At least if it’s buried properly.", "completion": "I think other people have thought more about this aspect of the problem and I think in general, with more confidence, that something will work out, but I think just etching stuff is already good enough under reasonable conditions. It’s a lot easier to have a little thing that will survive. It’s easier to have a small thing that will survive for hundreds of millions of years than to disfigure the earth in a way that will be noticeable and would call someone’s attention to it in hundreds of millions of years."} {"prompt": "Okay, this brings me to the main objection I had, which is that the lizard people probably don’t speak English, and so even if we bury Wikipedia, I think they might just find it very confusing. How is it clear that we can communicate any concepts to lizard people in a hundred million years’ time?", "completion": "Yes, I think that’s a pretty interesting question. That goes into things you want to think about. I do think when people have historically engaged in the project of, like, trying to figure out– like if you have a lost language or you have some relics you’re trying to make sense of, you’re really in a radically worse position than like the lizard people would be in with respect to this artifact, since we would have put a lot of information into it really attempting to be understood. I think we don’t really have examples of humans having encountered this super information-rich thing that’s attempting to be understood. I guess this is like a game, you can try and play it amongst humans and I think humans can win very easily at it, but it’s unclear the extent to which that’s because we have all this common context, and because I think humans do not need anything remotely resembling language– art easily wins this game– in order to easily build up a language of concepts just by simple illustrations, and diagrams, and so on. I think it’d be right to be skeptical that even when it’s not a language, we’re just using all of these concepts that are common.
We’ve thought about things in the same way, we know what we’re aiming at. I think I’m like reasonably optimistic, but it’s pretty unclear. This is also a thing that I guess people have thought about a lot, although in this case, I’m a lot less convinced in their thinking, than in the ‘writing stuff really small in a durable way case’."} {"prompt": "My understanding was that the people who thought about it a lot seemed very pessimistic about our ability to send messages. Well, I guess, to be honest, the only case I know about is, there was a project to try to figure out what messages should we put at the site where we’re burying really horrible nuclear waste. You’re putting this incredibly toxic thing under the ground and then you’re like, “Wow, we don’t want people in the future to not realize what this is, and then dig it up, and then kill themselves.”There was quite a lot of people, I guess linguists, sociologists, all these people who were trying to figure out what signals do we put there? Is it signs? Is it pictures? Whatever it is. They settled on some message that I think they drew out in pictures, that was, I thought, absolutely insanely bad because it was like– I couldn’t see how any future civilization would interpret it as anything other than like religious stuff that they would be incredibly curious about, and then would absolutely go and dig it up.[laughter]I’ll find the exact message that they decided to communicate and potentially read it out here, and people could judge for themselves.—Rob Wiblin: Hey folks, I looked up this message to add in here so you can pass judgement on it. Here it is:“This place is a message… and part of a system of messages …pay attention to it!Sending this message was important to us. We considered ourselves to be a powerful culture.This place is not a place of honor… no highly esteemed deed is commemorated here… nothing valued is here.What is here was dangerous and repulsive to us. This message is a warning about danger.The danger is in a particular location… it increases towards a center… the center of danger is here… of a particular size and shape, and below us.The danger is still present, in your time, as it was in ours.The danger is to the body, and it can kill.The form of the danger is an emanation of energy.The danger is unleashed only if you substantially disturb this place physically. This place is best shunned and left uninhabited.”As I said, I really think a future civilization, human or otherwise, would be insanely curious about anything attached to a message like that, and would guess that the site was religious in nature. If they hadn’t learned about nuclear radiation themselves already, I think they’d be more likely to dig at that spot than if it were simply left unmarked. Alright, back to the conversation.—Anyway, they did have this– I think actually the plan there was to write it in tons of languages that exist today in the hope that one of those would have survived. That was one of the options.", "completion": "That’s not going to be an option here."} {"prompt": "Not an option here.", "completion": "I think it’s quite a different issue– It’s different if you want to make a sign, so someone who encounters that sign can tell what it’s saying versus if I want to write someone a hundred million words, such as like somehow if they’re willing to spend– if we encountered a message from some civilization that we can tell has technological powers much beyond our own, we’re like, “Okay, that’s really high up on our list of priorities. 
I don’t know what the hell they’re talking about.” It’s just a very different situation where they were in this huge amount of content. It’s like the most interesting academic project of all academic– it goes to the top of the intellectual priority queue upon discovering such a thing.I have a lot more confidence in our ability to figure something out or a civilization who has a similar ability to us to figure something out under those conditions. Than under like, they’re walking around, they encounter a sign– perhaps they’re somewhat primitive at this point. I also have no idea what’s up with it. It’s also just not that much content. It’s unclear how you– in the case where you’re like are only giving them like 10,000 words of content or some pictures, they just don’t have enough traction to possibly figure out what’s up.Whereas, in this case, we have– we’re not just coming in with one proposal of how you could potentially build a shared conceptual language, we’re like, “We have a hundred proposals, we’re just trying them all, just every proposal like any fourth-grader came up with.“That’s fine. Throw it in there too.” [laughs] Bits are quite cheap so you can really try a lot of things in a much– Yes, I think it’s just a much better position than people normally thinkabout."} {"prompt": "I think archaeologists, when they’ve dug up a writing, sometimes they’ve decoded it by like analogy to other languages that we do have records about. Sometimes they’re just like the Rosetta Stone where it’s like, “Now, here we’ve got a translation so then we can figure out what that–” I think they had like a translation for two of them and there was a third language that was the same thing. Then they could figure out what the language sounded like from that, and then figure out very gradually what the words meant.I think there’s other cases where just from context, they’ve dug up stones and they’re like, “What is this?” It turns out that it’s a bunch of financial accounts for a company, and they’re like, figuring out like imports and exports from this place, which like makes total sense. You can imagine that they’ll be doing that. Your hope here is that we will just bury so much content, and then we’ll have like a bunch of pictures, like lots of words, repeating words, that eventually, they’ll be able to decode.They’ll figure out from some sort of context, I guess, they’ll be flicking through the encyclopedia and then they’ll find one article about a thing that they can figure out what it is, because they also have this thing. They’re like trees. Okay, we’ve got the article about trees and we still have trees. Then they kind of work out, “Well, what would I say about trees if I was writing an encyclopedia? They read an article about trees, so they guess what those words are. Then they kind of go out from there.", "completion": "We can make things a lot simpler than encyclopedia articles where you can be like, “Here’s a lexicon of a million concepts. For each of them or whatever, 10,000 concepts. For each of them, a hundred pictures, and a hundred sentences about them, and a hundred attempts to define them. Attempted to organize well."} {"prompt": "Yes. Okay, I agree. I think if you went to that level, then probably you could do it. Although some concepts might be extremely hard to illustrate.[laughter]", "completion": "Yes, I’m more optimistic about like communi– Well, I don’t know. 
Communicating technology seems easier than–"} {"prompt": "Just like, “Here’s a picture of a steam engine.” Whereas, maybe philosophy is a bit trickier, or religion. In the blog post, you suggested that this might be a pretty good bang for your buck in terms of reducing existential risk. I think you had a budget of $10 million for a minimum viable product of this. You were thinking, “Yes, this could improve their odds of surviving by one percentage point if we’re very careful about what messages we send them and what messages we don’t send them.” Do you still think something like that? The budget of $10 million seemed incredibly low to me. I guess here we’ve been envisaging something potentially a lot more ambitious than perhaps what you were thinking about at the time.", "completion": "Yes, $10 million, I think, does seem– After talking to people about what the actual storage options are, or how to make a message that people could best find, $10 million seems low and $100 million seems probably more realistic, which makes the cost-effectiveness numbers worse. I think it is worth pointing out that you have to go separately on the– If you imagine three phases– four phases of the project: figuring out what to say, somehow making a landmark people can identify, actually including a bunch of information, and then actually writing, trying to communicate the information, the thing that you wanted to say. If any one of those is expensive, you can relatively easily bring the others up to the same cost, if we’re getting to spend millions of dollars on each of those phases. I think actually, I’m probably imagining the lion’s share of the cost going into leaving a landmark, but that still leaves you with millions of dollars to spend on other components, which is a few people working full-time for years."} {"prompt": "I would have thought that the most difficult thing would be to figure out what to say and then figure out how to communicate it. If we’re talking about, it’s like drawing pictures for every word that we think lizard people would be able to understand, that seems more like a lot of homework.[laughter]", "completion": "I think it’s hard to ballpark the cost of that kind of work. Are we talking a hundred person-years or a thousand person-years? How many person-years of effort is that? You can think about how many person-years of effort go into reasonable encyclopedias. It’s tricky thinking about the costs. I think at $100 million, I feel good about how thoroughly– again, you’re not going to be able to have a great answer about what to send, but you’re going to have an answer supported by people who are going to think for a few years. I guess probably if you’re doing this project, you’re doing it under a certain set of rules. This project is already predicated on a bunch of crazy views about the world, and so you’re making an all-out bet on those crazy views about the world. When you’re doing these other stages, you’re also sort of just conditioning on those crazy views about the world being correct, about what basic things are important, and how things basically work, which I think does in some sense help. You only have to eat those factors of those crazy views being right once. You don’t have to pay them again. I guess I’ve always imagined that it would take less than a few person-years of effort to produce like– if I wanted to produce something that could be understood by a future civilization. Maybe I’m just way too optimistic about that.
I haven’t engaged with any of the communities that have thought about this problem in detail. Totally possible that I’m way off base.Anyway, when I imagine people spending 10 years on that, I’m like, “10 years? That seems pretty good. It seems that they’re going to have this nailed. They’re going to have tested it a bunch of times. They’re going to have like six independent proposals that are implemented separately. Each of them is going to be super exhaustive with lots of nice pictures.” Nice pictures are actually a little bit hard, but they probably just get these bits and they’re like, “What do they do with all the bits?”"} {"prompt": "Should listeners maybe fund this idea? Has anyone expressed interest in being the team lead on this?", "completion": "Yes, there’ve been some conversations, very brief conversations of the landmarking step. I think that’s probably the first thing I would be curious about. What is the cost like? I don’t think it’s a big project to be funded yet. I don’t think anyone’s really expressed interest in taking it up and running with it. [chuckles] I think the sequence would probably be, first check to see if the landmark thing makes sense and roughly, how it’s going to survive if it would necessarily be. Then think about the– Maybe do a sanity check on all the details, and then start digging in a little bit for a few months on how you would send things and how good does it actually look? Then six months in, you’d be like, now we have a sense of whether this is a good deal."} {"prompt": "If one of you listeners out there is interested in taking on this project, send me an email because you sound like a kind of fun person.[laughter]Do you have any other neglected or crazy sounding ideas that might potentially compare favorably to more traditional options for reducing existential risk?", "completion": "I do think it’s worth caveating, I think, if there’s any way to try and address AI risk, that’s probably going to be better than this kind of thing related to my comparative advantage seeming to be in AI risk stuff. In terms of weird, altruistic schemes, I feel like I haven’t thought that much about this kind of thing over the last year. I don’t have anything that feels both very weird and very attractive."} {"prompt": "[laughs] What about anything that’s just attractive? I’ll settle. [chuckles]", "completion": "I remain interested in– There’s a few things we discussed last time that, maybe very shallowly or maybe we didn’t have a chance to touch on, but I remain excited about. Some basic test of interventions that may affect cognitive performance seem like pretty weirdly neglected. Right now, I’m providing some funding to some clinical psychiatrists in Germany to do a test of creatine in vegetarians, which seems pretty exciting. I think the current state of the literature on carbon dioxide and cognition is absurd. I probably complained about this last time I was here. It’s just– [crosstalk]"} {"prompt": "Let’s dive into this. It was a mistake of mine not to put these questions in. Just to go back on this creatine issue, there’s been some studies, one study in particular that suggested that for vegetarians and potentially for non-vegetarians as well, taking creatine gives you an IQ boost of a couple of points. It was very measurable even with a relatively small sample. This was a pretty big effect size by the standards of people trying to make people smarter.", "completion": "Small by the standards of people normally looking for effects. 
Like a third of a standard deviation. This is respectable, but it’s huge– I don’t know of many interventions being that effective."} {"prompt": "Yes. If we can make like everyone three IQ points smarter, that’s pretty cool. Then there was just not much follow-up on this even though it seems like this is way better than most of the other options we have for making people smarter other than, I suppose, improving health and nutrition.", "completion": "Yes, this review is on the effects in omnivores. That’s been better studied. I think it doesn’t look that plausible that it has large effects in omnivores and there’s been some looking into mechanisms, and in terms of mechanisms, it doesn’t look great. If you look at like how creatine– I don’t know much about this area, all these areas we’re listing now are just random shit I’m speculating about sometimes. I really want to– I’ve got to put that out there. There should be a separate category for my views on AI. Anyway, yes, looking at mechanisms, it doesn’t look that great. It would be surprising given what we currently know about biology for creatine supplementation to have this kind of cognitive effect. It’s possible and it’s not ruled out in vegetarians. The state in vegetarians is, I think, one inconclusive thing and there’s one really positive result. It seems just worth doing a reasonably powered check in vegetarians again. I would be very surprised if something happened, but I think it’s possible. Some people would be more surprised, some people are like obviously nothing, but I’m at the like, 5-10% seems like a reasonable bet."} {"prompt": "On the vegetarianism point, when I looked at that paper, it seemed like they had chosen vegetarians mostly just because they expected the effect to be larger there, because it’s the case that creatine supplementation also increases free creatine in the body for meat eaters. Just to explain for listeners who don’t know, meat has some creatine in it, although a lot less than people tend to supplement with. Vegetarians seem to have less because they’re not eating meat. The supplementation eventually has a larger effect.", "completion": "Most likely that was just the choice that study made and then there was random variation where some studies– I’ve definitely updated more in the direction of studies showing whatever– it’s very, very easy to mess up studies, or very, very easy to get wrong results, not even just in the like 5% of the time you’d expect from results significant at p = .05, but just radically more often than that you get results that are wrong for God knows what reason. Anyway, so most likely that’s a study that happened to return a positive result since they happened to be studying vegetarians. That was the reason they did it. Seemed like it should have a larger effect. I think since we’ve gotten negative evidence about the effects in omnivores, it doesn’t seem that likely. Although that would also be consistent with the effects just being three times smaller in omnivores, which would be plausible, and then it would be compatible with what we know."} {"prompt": "You were kind of, “Goddamn, this is really important but like people haven’t put money into it, people haven’t run enough replications of this.” You just decided to–", "completion": "One replication. It’s one pre-registered replication. That’s all I want."} {"prompt": "You were like, “I’m going to do it myself.” Talk about that for a minute?", "completion": "Well, I feel like in this case, providing funding is not the hard part, probably.
I’m happy for stuff like this. I’m very interested in providing funding. I made a Facebook post like, “I’m really interested in providing funding” and then EA stepped up and was like, “I know a lab that might be interested in doing this.” They then put me in touch with them."} {"prompt": "When might they have results?", "completion": "In a year. I don’t know."} {"prompt": "Okay. Are you excited to find out?", "completion": "I am. Yes, I’m excited to see how things go."} {"prompt": "Yes, talk about the carbon dioxide one for a minute because this is one that’s also been driving me mad the last few months, just to see that carbon dioxide potentially has enormous effects on people’s intelligence, and in offices you eventually just have extremely– And lecture halls especially just have potentially incredibly elevated CO2 levels that are dumbing us all down when we most need to be smart.", "completion": "Yes. I reviewed the literature a few years ago and I’ve only been paying a little bit of attention since then, but I think the current state of play is, there was one study with preposterously large effect sizes from carbon dioxide in which the methodology was put people in rooms, dump some gas into all the rooms. Some of the gases were very rich in carbon dioxide and the effect sizes were absurdly large. They were like, if you compare it to the levels of carbon dioxide that occur in my house or in the house I just moved out of, the most carbon dioxide-rich bedroom in that house had a one standard deviation effect amongst Berkeley students on this test or something, which is absurd. That’s totally absurd. That’s almost certainly–"} {"prompt": "It’s such a large effect that you should expect that people, when they walk into a room which has elevated carbon dioxide levels, they should just feel like idiots at that point, or they should feel noticeably dumber in their own minds.", "completion": "Yes, you would think that. To be clear, in the rooms that have levels that high, people can report it feels stuffy, and so part of the reason for that methodology in the papers– just dumping in carbon dioxide– is that if you make a room naturally that CO2-rich, it’s going to also just be obvious that you’re in the intervention group instead of the control. Although to be fair, even if– I don’t know, at that point, like even a placebo effect maybe will do something. I think almost certainly that seems wrong to me. Although maybe this is not a good thing to be saying publicly on a podcast. There’s a bunch of respected researchers on that paper. Anyway, it would be great to see a replication of that. There was subsequently a replication with exactly the same design which also had p = 0.0001. Now, we’ve got the two precise replications with p = 0.0001. That’s where we’re at. Also the effects are stupidly large. So large. You really, really need to care about ventilation effects. This room probably is– this is madness. Well, this building is pretty well ventilated but still, we’re at least a third of a standard deviation dumber."} {"prompt": "Yes, I’m sure dear listeners you can hear us getting dumber over the course of this conversation as we fill this room with poison. Yes, I guess potentially the worst case would be in meeting rooms or boardrooms where people are having very long– Yes, prolonged discussions about difficult issues.
They’re just getting progressively dumber as the room fills up with carbon dioxide and they’re going to be more irritable as well.", "completion": "Yes, it would be pretty serious and I think that people have often cited this in attempts to improve ventilation, but I think people do not take it nearly as seriously as they would have if they believed it. Which I think is right, because I think almost certainly the effect is not this large. If it was this large, you’d really want to know and then–"} {"prompt": "This is like lead poisoning or something?", "completion": "Yes, that’s right."} {"prompt": "Well, this has been enough to convince me to keep a window open whenever I’m sleeping. I really don’t like sleeping in a room that has no ventilation or no open door or window. Maybe I just shouldn’t worry because at night who really cares how smart I’m feeling while I’m dreaming?", "completion": "I don’t know what’s up. I also haven’t looked into it as much as maybe I should have. I would really just love to be able to stay away, it’s not that hard. The effects are large enough, but it’s also short-term enough, that it’s just extremely easy to check. In some sense, it’s like “What are you asking for, there’s already been a replication”, though, I don’t know, the studies they use are with these cognitive batteries that are not great. If the effects are real you should be able to detect them in very– Basically with any instrument. At some point, I just want to see the effect myself. I want to actually see it happen and I want to see the people in the rooms."} {"prompt": "Seems like there’s a decent academic incentive to do this, you’d think, because you’d just end up being famous if you pioneer this issue that turns out to be extraordinarily important and then causes buildings to be redesigned. I don’t know, it could just be a big deal. I mean, even if you can’t profit from it in a financial sense, wouldn’t you just want the kudos for like identifying this massive unrealized problem?", "completion": "Yes, I mean to be clear, I think a bunch of people work on the problem and we do have– At this point, I think there’s the original– The things I’m aware of, which is probably out of date now, are the original paper, a direct replication, and a conceptual replication, all with big-looking effects but all with slightly dicey instruments. The conceptual replication is funded by this group that works on ventilation, unsurprisingly."} {"prompt": "Oh, that’s interesting.", "completion": "Big air quality. Yes, I think that probably the take of academics, insofar as there’s a formal consensus process in academia, would be to the effect that this is real, it’s just that no one is behaving as if an effect of that size actually existed, and I think they’re right to be skeptical of the process in academia. I think that does make– The situation is a little bit complicated in terms of what you exactly get credit for. I think the people that would get credit should be, and rightfully would be, the people who’ve been investigating it so far. This is sort of more like checking it out more for– Checking it out for people who are skeptical. Although everyone is implicitly skeptical given how much they don’t treat it like an emergency when carbon dioxide levels are high."} {"prompt": "Yes, including us right now. Well, kudos to you for funding that creatine thing.
It would be good if more people took the initiative to really insist on funding replications for issues that seem important where they’re getting neglected.", "completion": "Yes, I think a lot of it’s great– I feel like there are lots of good things for people to do. I feel like the bottleneck is mostly just people who have the relevant kinds of expertise and interests. This is one category where I feel people could go far and I’m excited to see how that goes."} {"prompt": "Last year OpenAI published this blog post which got people really excited, showing that there has been a huge increase in the amount of compute used to train cutting edge ML systems. I think for the algorithms that have absorbed the most compute, there was a 300,000-fold increase in the amount of compute that had gone into them over six years. It seemed like that’d been potentially a really big driver of more impressive AI capabilities over recent years. Would that imply faster progress going forward? Or do you think it will slow down as the increase in compute runs its course and it gets harder and harder to throw more processors at these problems?", "completion": "I think it just depends on what your prior perspective was. If you had a prior perspective where you were eyeballing progress in the field and being like, “Does this feel like a lot of progress?” Then in general, it should be bad news– or not bad news, but it should make you think AI is further away. Then you’re like, “Well there was a lot of progress.” I had some intuitive sense of how much progress that was. Now I’m learning that that rate of progress can’t be sustained that long, or that a substantial part of it has been this unscalable thing. You could talk about how much further you could go, but maybe you had a million X over that period and you can have a further thousand X, or something like that, maybe 10,000 X."} {"prompt": "Well, I suppose there’s only so fast that process of getting faster can go, and then also just the cost of buying tons of these things. People were able to ramp it up because previously it was only a small fraction of the total costs of their projects, but I guess it’s now getting to be a pretty large fraction of the total cost of all of these AI projects in just buying enough processors.", "completion": "Yes, a lot of things have a large compute budget. It’s still normally going to be small compared to staff budget and you can go a little bit further than that, but it’s getting large, and if you’re at the point where you’re training human-level AI systems, the compute cost for that training run could be a significant fraction of global output. You could say maybe this trend could continue until you got up there. It’s probably not at this pace, it’s going to have to slow down a long time before it gets to like we are spending 2% of GDP on computers doing AI training. If you had that perspective where you were eyeballing progress, then I think it should generally be an update towards longer timelines. I think if you had a perspective that is more random, coming from where you’re like, “Man, it’s really hard to tell.” It’s very hard to eyeball progress and be like, “How impressive is this?
How impressive is beating humans at chess, or beating humans at Go, or classifying images as well as humans on this particular image classification task?” I find it very hard to really eyeball that kind of progress and make a projection. I think if instead your estimates were coming from– Well, we think there is some more– We have some sketchy ways of estimating how much compute might be needed. We can make some analogy with the optimization done by evolution, or an extrapolation of training times, or other kinds of arguments about the human brain, which are really anchored to amounts of compute. Then I think you might have a perspective that’s more like, “Well, this tells us something about, on paper, these arguments would have involved using large amounts of compute.” There’s a lot of engineering effort in that kind of scale-up. There’s a lot of genuine uncertainty, especially if you’re talking about moderate timelines, of, “Will that engineering effort actually be invested and will that willingness to spend actually materialize?” I think that might make you move in the direction of like, “Yes, apparently, people are putting in the effort and engineering progress is reasonably brisk.” If instead, you were doing an estimate that was really driven by how much compute– This is the style of the old estimates futurists made. If you look at, I mean, Moravec– one of the earlier estimates of this flavor– and Kurzweil has a very famous estimate of this flavor, where they’re like, “It really matters how much compute you’re throwing at this task.” If you have that kind of view and then you see this compute spending is rising really rapidly, I guess that’s evidence that maybe it will continue to rise and therefore, it will be shorter than you would have thought."} {"prompt": "Some people seem to think that we may be able to create a general artificial intelligence just by using the algorithms that we have today, but waiting for another decade or two’s worth of processing power to come online, progress in the chips and just building that infrastructure. How realistic do you think that is? Is that a live possibility in your mind?", "completion": "I think it’s really hard to say, but it’s definitely a live possibility. I think a lot of people have an intuitive reaction– Some people have an intuition that’s very much “That’s obviously how it’s going to go.” I don’t think I sympathize with that intuition. Some people on the other side have an intuition that obviously there are really important things we don’t yet understand which will be difficult, so it’s hard to know how long they will take to develop, and it’s going to be much longer than the amount of time required to scale up computing. I’m also not super sympathetic to that either. I kind of feel like it’s really hard to know, it seems possible. It’s hard to rule it out on a priori grounds. Our observations are pretty consistent with things being loosely driven by compute. If you think of it like, what is the trade-off rate between compute and progress, conceptual progress or algorithmic progress? I think our observations are pretty compatible with a lot of importance on compute, and also are compatible with the scale-up of existing things eventually getting you to– I guess that’s definitely a view I have, that eventually, enough scale-up will almost certainly work. It’s just a question of how much, and is that within what we’ll see over the next one or two decades, or is it going to take you far past physical limits?
Or, I’ll end up just pretty uncertain. I think a lot of things are possible."} {"prompt": "How does this question of the importance of compute relate to Moravec’s paradox? I guess, what is that for the audience of people who haven’t heard of it?", "completion": "This is the general observation. There are some tasks humans think of as being intellectually difficult. A classic example is playing chess, and there are other tasks that they don’t think would be computationally difficult, that are like picking up an object. Looking at a scene, seeing where the objects are, picking up an object, and then bringing it.It has seemed to be the case that the tasks that people think of as traditionally intellectually challenging were easier than people suspected relative to the task people thought of as not that intellectually demanding. It’s not super straightforward because there’s still certainly big chunks of intellectual inquiry that people have no idea how to automate it and I think that’s the general pattern."} {"prompt": "You mean for example, humans think of philosophy is difficult and it’s also hard for computers to do philosophy or they don’t seem to be beating us at that.", "completion": "Or mathematics or science. I guess people might often think to humans, it feels similar maybe to be doing mathematics and to be playing a really complicated board game, but to a machine, these tasks are not that similar."} {"prompt": "The board game is way easier.", "completion": "Board games it turned out was very, very easy relative to all the other things even for– At this point, Go is a reasonable guess for the hardest board game. It was much easier than it is for other tasks for humans to automate. Yes, I think in general part of what’s going on there is the reasoning humans have conscious access to is just not that computationally demanding. We have some understanding, and it is a part of the very early optimism about AI.We understand that when a human is consciously manipulating numbers or symbols or actually casting their attention to anything, they’re just not doing things that fast. A human is lucky if they can be doing 100 operations per second. That’s insane if a human is able to multiply numbers at that kind of speed that implies that or something. You’re like, “Wow, that’s incredible.”But when a human is doing, underneath that there’s this layer, which is using vastly, vastly more computation. In fact, a lot of the difficulty, especially if you’re in compute-centric world is when you look at the task, you say, “How hard is that task for humans relative to a machine?”A lot of the questions are like, “How well is a human leveraging all the computational capacity that they have when they’re doing that task?”For these tasks, any task that is involved in conscious reasoning, maybe it’s less likely, at least the conscious part is not doing anything computationally interesting. Then you have this further issue for things like board games, where it’s like a human is not under much selection pressure to use– A human has not really evolved to play board games well. They’re not using much compute in their brain very well at all. Best guess would be if you evolved like much, much tinier animals that are much much better at playing board games than humans."} {"prompt": "Is it not the case that the human brain just has a ridiculous fraction of itself devoted to visual processing that has just required a ton of compute and I guess also evolution to tease that part of the brain well.", "completion": "Yes. 
I don’t know off hand what the number is, but we’re talking about like a log scale, so it just doesn’t even matter that much. It uses a reasonable– Vision uses a reasonable chunk of the brain and it’s extremely well optimized for it. It’s like when people play board games, they are probably leveraging some very large fraction of their brain. Again, the main problem is like, the visual cortex is really optimized for doing vision well. They’re really using their brain for all that. Usually, the luckiest case is when doing mathematics or playing a game somehow makes enough intuitive sense, or maps on well enough intuitively, that you can build up these abstractions to leverage the full power of your brain through that task. It’s pretty unusual. This is not obvious a priori, [inaudible 01:00:39] this is just an after-the-fact story. You could imagine that there are people who are actually able to use their entire machinery of visual processing to play some board games. You can imagine that. I think that’s actually a live possibility. We talk about Go for example, and we look at the way that we’ve now solved Go. The amount of compute you would need to beat humans at Go using an entirely brute force strategy, using alpha-beta search or something, is a lot compared to your visual cortex or the visual system more broadly. You can make a plausible case that people are able to use a lot of that machinery– They are able to use a lot of machinery in playing Go and, to a slightly lesser extent, chess, for doing position evaluation, intuitions about how to play the game."} {"prompt": "You’re saying that you think the part of the brain that does visual processing is getting brought online to notice patterns in Go and is getting co-opted to do the board game work.", "completion": "Yes, at least that’s possible and consistent with our observations of how hard it is to automate the game. We just don’t know very much. Lots of things are consistent with our observations."} {"prompt": "Do you hope to find out whether we’re constrained by compute or algorithmic progress?", "completion": "Yes. I generally think– In some sense it’s not going to be that we’re constrained by one or the other, it’s going to be some marginal returns to each. What is the rate of substitution between more compute and more algorithmic progress? In general, I think it seems better from a long-term perspective if it takes a lot of algorithmic progress to substitute for a small amount of compute. The more you’re in that world, the more concentrated different actors’ compute needs are. They are probably building really powerful AI systems. Everyone who’s building them is going to have to use– You’re going to have to be paying attention. They’re going to be using a very large fraction of their computational resources, and any actor who wants to develop very powerful AI will also be using a reasonable fraction of the world’s resources, and that means that it is much easier to know who is in that game, and it’s much harder for someone to unilaterally do something. It’s much easier for the players to have a realistic chance of monitoring and enforcement and also just have a realistic chance of getting in a room and talking to each other. Probably not literally a room, but reaching understanding and agreement. That’s one thing.
Maybe the other thing is that the harder it is for algorithmic progress to substitute for hardware progress, the slower the subsequent rate of progress is likely to be relative to what we’ve observed historically. If you’re in a world where it turns out that just clever thinking really can drive AI progress extremely rapidly, and the problem is just that we haven’t had that much clever thinking to throw at the problem, you can really imagine, as one scales up AI and is able to automate all that thinking, having pretty fast ongoing progress, which might mean there’s less time between when long-term alignment problems become obvious and start mattering and AI can start helping with them, and the point where it’s catastrophic to have not resolved them. Generally, if clever ideas can shorten that period a lot, it’s a little bit bad. It’s a little bit less likely that the automation– like, the AI– will have an incredible overnight effect on the rate of hardware progress, though it will also presumably accelerate it. Automation will help there as well but–"} {"prompt": "You think if compute is what predominantly matters, then it’s going to be a more gradual process. We’ll have longer between the point when machine learning starts to get used for important things and we start noticing where they work and where they don’t work and when a lot of things are getting delegated to machine learning, relative to the algorithmic case where it seems like you get really quite abrupt changes in the capabilities.", "completion": "Yes, I think a lot of that. This could also change the nature of AI research. A lot of that is from hardware being this very mature industry with lots of resources being thrown at it and performance being really pretty well understood, and it would be hard to double investment in that, and also it’s not that sensitive to weird questions about quality of human capital or something. You just sort of understand it. You have to do a lot of experimentation. It’s relatively capital intensive."} {"prompt": "There’s quite big lags as well.", "completion": "Yes. It just seems like generally it would be more stable. Sounds like good news. This is one of the reasons one might give for being more excited about faster AI progress now. You might think that probably the biggest reason to be excited is like, if you have faster AI progress now, you’re in the regime where we’re using– If you manage to get to some frontier where we’re using all the available computation as well as you could, then subsequent progress can be a little more stable. If you have less AI progress now and, at some point, people only really start investing a bunch once it becomes clear they can automate a bunch of human labor, then you have this more whiplash effect where you’d have a burst of progress as people really start investing."} {"prompt": "A few weeks ago, we published our conversation with Pushmeet Kohli who’s an AI robustness and reliability researcher at DeepMind over in London.
I guess to heavily summarize Pushmeet’s views, I think he might’ve made a couple of key claims. One was that alignment and robustness issues, in his view, appear everywhere throughout the development of machine learning systems, so they require some degree of attention from everyone who’s working in the field, and according to Pushmeet, this makes the distinction between safety research and non-safety research somewhat vague and blurry, and he thinks people who are working on capabilities are also helping with safety, and improving reliability also improves capabilities for you, because then you can actually design algorithms that do what you want. Secondly, I think he thought that an important part of reliability and robustness is going to be trying to faithfully communicate our desires to machine learning algorithms, and this is analogous to, although a harder instance of, the challenge of just communicating with other people, getting them to really understand what we mean. Although of course it’s easier to do that with other humans than with other animals or machine learning algorithms. A third point was, I guess, just a general sense of optimism that DeepMind is working on this issue quite a lot and is keen to hire more people to work on these problems, and I guess they sense that probably we’re going to be able to gradually fix these problems with AI alignment as we go along and machine learning algorithms get more influential. I know you haven’t had a chance to listen to the whole interview, but you skimmed over the transcript. Firstly, where do you think Pushmeet is getting things right? Where do you agree?", "completion": "I certainly agree that there’s this tight linkage between getting AI systems to do what we want and making them more capable. I agree with the basic optimism that people will need to address this ‘do what we want’ problem in getting AI systems to be useful. I think it is more likely than not that people will have a good solution to that problem. I think even if you didn’t have sort of long termists– maybe there’s this interesting intervention of, “Should long termists be thinking about that problem in order to increase the probability?” I think even if the actions of the long termists are absent, there’s a reasonably good chance that everything would just be totally fine. In that sense, I’m on board with those claims, definitely. I think that I would disagree a little bit, in that I think there is a meaningful distinction between activities whose main effect is to change the date by which various things become possible, and activities whose main effect is to change the trajectory of development. I think that’s the main distinguishing feature of working on alignment, per se. You care about this differential progress towards being able to build systems the way we want. I think in that perspective, it is the case that the average contribution of AI work is almost by definition zero on that front, because it’s bringing the entire– If you just increased all the AI work by a unit, you’re just bringing everything forward by one unit. I think that doesn’t mean there isn’t this well-defined theme, which is, “Can we change the trajectory in any way?”, and that’s an important problem to think about.
I think there’s also a really important distinction between the failure which is most likely to disrupt the long term trajectory of civilization and the failure which is most likely to be an immediate deal breaker for systems actually being useful or producing money, and maybe one way to get at that distinction is related to the second point you mentioned. Communicating your goals to an ML system is very similar to communicating with a human. I think there is a hard problem of communicating your goals to an ML system, which we can view as a capabilities problem. Are they able to understand things people say? Are they able to form the internal model that would let them understand what I want or understand– In some sense, it’s very similar to the problem of predicting what Paul would do, or it’s a little slice of that problem, like predicting under what conditions Paul would be happy with what you’ve done. That’s most of what we’re dealing with when we’re communicating with someone. We’d be totally happy– if I’m talking with you, I would be completely happy if I just managed to give you a perfect model of me; then the problem is solved. I think that’s a really important AI difficulty for making AI systems actually useful. I think that’s less core to the– That’s less the kind of thing that could end up pushing us in a bad, long-term direction, mostly because we’re concerned about the case– We’re concerned about behavior as AI systems become very capable and have a very good understanding of the world around them and of the people they’re interacting with. The really concerning cases are ones where AI systems actually understand quite well what people would do under various conditions, understand quite well what they want– so there aren’t what we think about as normal communication problems between people– but they sort of understand what Paul wants and aren’t trying to help Paul get what he wants. I think that a lot of the interesting difficulty, especially from a very long-term perspective, is really making sure that no gap opens up there. Again, the gap between the problems that are most important from the very long run perspective and the problems that people will most be confronting in order to make AI systems economically valuable– I do think that there’s a lot of overlap, that there are problems that people are working on that both make AI systems more valuable and also help very directly with the long run outcome. I think if you’re interested in differentially changing the trajectory or improving the probability of things going well over the long term, you’re more inclined to focus precisely on those problems which won’t be essential for making AI systems economically useful in the short term, and I think that’s really distinctive to what your motivation is or why you– How you’re picking problems or prioritizing problems."} {"prompt": "Sounds like you’re a little bit more skeptical of that, or do you think ideally people should in the medium term be aiming to work on things that seem like they disproportionately push on robustness and reliability?", "completion": "Yes, I think people who are mostly concerned about the long term trajectory, they face this dilemma in every domain where, if you live in the world where you think that almost all of humanity’s problems, almost all of the most serious challenges to humanity, are caused by things humans are doing– not only by things humans are doing, but by things humans are doing that we would often think of as part of
productive progress, part of the goal.We’re building new technologies but those technologies are also the things that pose the main risks. Then you have to be picky if you’re a person who wants to change the long term trajectory just because the average– It’s just sort of like I probably am helping address those problems if I go to work and I just go do a random thing. I go work on a random project. Make a random product better.I am helping address the kinds of problems we’re concerned about but I’m also at the same time contributing to bringing those problems closer to us in time. It’s sort of like roughly awash. If you’re on the average product, making the average product work and there are subtle distinctions we could make of like– I think if you are motivated to make products work well, if you’re like, “Not only do I want to do the thing that’s most economically valuable, I want to have more of an emphasis on making this product robust.” I think you’re just generally going to make a bunch of low level decisions that will be helpful. I definitely think that there’s a big– You’re going to have a pretty big impact by being fussy about which problems you work on."} {"prompt": "I guess there’s this open question of whether we should be happy if AI progress across the board just goes faster. What if yes, we can just speed up the whole thing by 20%. Both all of the safety and capabilities. As far as I understand there’s kind of no consensus on this. People vary quite a bit on how pleased they’d be to see everything speed up in proportion.", "completion": "Yes. I think that’s right. I think my take which is a reasonably common take, is it doesn’t matter that much from an alignment perspective. Mostly, it will just accelerate the time at which everything happens and there’s some second-order terms that are really hard to reason about like, “How good is it to have more computing hardware available?” Or “How good is it for there to be more or less kinds of other political change happening in the world prior to the development of powerful AI systems?”There’s these higher order questions where people are very uncertain of whether that’s good or bad but I guess my take would be the net effect there is kind of small and the main thing is I think accelerating AI matters much more on the like next 100 years perspective. If you care about welfare of people and animals over the next 100 years, then acceleration of AI looks reasonably good.I think that’s like the main upside. The main upside of faster AI progress is that people are going to be happy over the short term. I think if we care about the long term, it is roughly awash and people could debate whether it’s slightly positive or slightly negative and mostly it’s just accelerating where we’re going."} {"prompt": "Yes, this has been one of the trickier questions that we’ve tried to answer and in terms of giving people concrete career advice. It seems to me if you’re someone who has done a PhD in ML or is very good at ML, but you currently can’t get a position that seems especially safety-focused or is going to disproportionately affect safety more than capabilities. It is probably still good to take a job that just advances AI in general mostly because you’ll be right in the cutting edge potentially of what’s going on, and improving your career capital a lot and having like relevant understanding of the key issues.The work I guess you think is close to awash. It speeds things up a little bit, everything goes in proportion. 
It’s not clear whether that’s good or bad, but then you can potentially later on go and work on something that’s more alignment specific, and that is the dominant term in the equation. Does that seem reasonable?", "completion": "Yes. I think that seems basically right to me. I think there’s some intuitive hesitation with the family of advice that’s like, “You should do this thing, which we think is roughly awash on your values now, but there will be some opportunity in the future where you can sort of make a call.” I think there’s some intuitive hesitation about that, but I think that is roughly right. Imagine if you offered Paul– if there were two possible worlds. In one there are twice as many people working on machine learning and AI, but half of them really care about the long term and ensuring that AI is developed in a way that’s good for humanity’s long term– that sounds like a good trade. We maybe then have less chance, less opportunity to do work right now. I think that’s the main negative thing. There will be less time to think about the alignment problem per se, but on the other hand, it seems really good if a large fraction of the field really cares about making things go well. I just expect a field that has that character to be much more likely to handle issues in a way that’s good for the long term. I think you can scale that down. It’s easiest for me to imagine the case where, for a significant fraction of the field, it is like that, but I think that if anything, the marginal people at the beginning probably have a larger, a better cost-benefit analysis for them."} {"prompt": "I was suggesting that this would be the thing to do if you couldn’t get a job that was like AI alignment specific already. Say that they want to join your team but they’re like just not quite good enough yet, they need to learn more potentially. There’s just only so fast that the team can grow. Even though they’re good, you just can’t hire as quickly as people are coming on board, but I suppose you have to make sure, if people go into these roles that we think are currently kind of just neutral but good for improving their skills, that they don’t forget about that. That the original plan was at some point to switch to something different. There’s a bit of a trap. It seems people just in general, they tend to get stuck in doing what they’re doing now and convince themselves that whatever they’re doing is actually really useful, so you might think, “Yes, it would be good to go and then switch out,” but they might have some doubts about whether in fact they will follow through on that.", "completion": "Yes, I think that’s right. It would be an even happier world certainly if you took that half of people who might’ve gone into ML, and instead moved them all into really thinking deeply about the long term and how to make things go well. That sounds like an even better world still.
It seems to be pretty good if you really trusted someone to– If someone really cared about the long term, and you’re like, “What should I do?” It’s a reasonably good option to just be like, “Go do this thing which is good on the short term and adjacent to an area we think is going to be really important over the long term.”"} {"prompt": "There’s been this argument over the years that it would just be good in some way that we can’t yet anticipate to have people at the cutting edge of machine learning research who are concerned about the long term and alert to safety issues and alert to alignment issues that could play out or could have effects on the very long term. People have gone back and forth on how useful that actually would be, to just be in the room where the decisions are getting made. It just occurred to me that it seems the machine learning community is really moving in the direction of sharing the views that you and I hold. A lot of people are just becoming concerned about “Will AI be aligned in the long term?” It might be that if you’re particularly concerned about that now, then maybe that makes you different from your peers right now, but in 10 years’ time or 20 years’ time everyone will have converged on a similar vision as we have a better idea of what machine learning actually looks like and what the risks are when it’s deployed.", "completion": "Yes, I think that’s an interesting question or an interesting possible concern with that approach. I guess my take would be that there are some– I don’t know if you’d call them values differences or deep empirical or worldview differences– that are relevant here, where I think to the extent that we’re currently thinking about problems that are going to become real problems, it’s going to become much, much more obvious that they are real problems. I think that to the extent that some of the problems we think about over the very long term are already obviously problems, people in the ML community are very interested in problems that are obviously problems. Or problems that are affecting the behavior of systems today. Again, if these problems are real, that’s going to become more and more the case over time and some people will become more and more interested in those problems. I still think there are likely to be– There is this question of how much are you interested in making the long term go well versus how much are you doing your job or pursuing something which has a positive impact over the short term, or that you’re passionate about or interested in for this other, non-long term impact. I do think there’s just continuously going to be some calls to be made or some different decisions. The field embodies some set of values. I think that people’s empirical views are changing more than the set of implicit values that they have. I think if you just said everyone who really cares about the long term isn’t going into this area, then the overall orientation of the field will persistently be different."} {"prompt": "Do you have any views on the particular technical approaches that Pushmeet mentioned in the episode or that the DeepMind folks have written up on their safety blog?", "completion": "The stuff I’m most familiar with that Pushmeet’s group is working on is verification for robustness to perturbations. Some work on verification more broadly, and some work on adversarial training and testing. Maybe those are the three things, I don’t know if there’s something else.
I’m happy to go through those in order."} {"prompt": "Yes, go through those.", "completion": "I guess I’m generally pretty psyched about adversarial testing and training and verification. That is, I think there is this really important problem over both– This is one of those things at the intersection of like it matters over the short term. I think maybe matters even more over the very long term of like you have some AI system, you want to delegate a bunch of work to maybe not just one but a whole bunch of AI systems.If they fail catastrophically, it would be really irrecoverably bad. You can’t really rule out that case with traditional ML training because you’re just going to try a thing out a bunch of cases that you’ve generated so far, experienced so far. You’re really not going to be getting–Your training processes aren’t at all constraining. There’s potential catastrophic failure in any situation that comes up.We just want to have something, we want to change the ML training process to respect– To have some information about what constitutes a catastrophic failure and then not do that. I think that’s a problem that is in common between the short and long term. I think it matters a lot on the long term. It’s a little bit hard to say whether it’s more on the long term or short term, but I care more about the long term..I think that the main approaches we have to that are these– The three I really think about are adversarial training and testing, verification and interpretability or transparency. I just think people getting familiar with those techniques, becoming good at them, thinking about how you would apply them to richer kinds of specifications, how you grapple with these fundamental limitations and adversarial training where you’re like you have to rely on the adversary to think of a kind of case.The way the technique works in general is like, “I’m concerned about my system failing in the future. I’m going to have an adversary who’s going to generate some possible situations under which the system might fail. Then we’re going to run on those and see if it fails catastrophically.” You have this fundamental limitation where adversaries aren’t going to think of everything.It’s like people who are just getting experience, how do we grapple with that limitation? In some sense, verification is like a response to that limitation maybe the space between or when– I think it’s productive to have people thinking about both of verification and the limits of verification and testing and limits of testing. Overall I’m pretty excited about all of that."} {"prompt": "Do you share Pushmeet’s general optimism?", "completion": "I don’t know quantitatively exactly how optimistic he is. My guess would be that I’m less optimistic in the sense that I’m like “Well, there’s like 10% chance that we will mess this up and lose the majority of the value of the future.” Whereas that’s not I get when listening to him, it’s not the overall sense I get of where he’s at. It’s a little bit hard to know how to translate between a vibe and an actual level of optimism."} {"prompt": "Yes, it is interesting. Someone can think there is a 20% chance that we’ll totally destroy everything, but still just a kind of a cheerful disposition. [laughs] I came across this. Well, things go well. 
Among people working on existential risks and global catastrophic risks, and I guess AI in particular, there’s this trade-off between not wanting to do things that other people disagree with or aren’t enthusiastic about, and at the same time not wanting to have a field that’s so conservative that there are no experiments done unless there is a consensus behind them. Do you think people are too inclined to make ‘unilateralist curse type’ mistakes, or are they not trying things enough?", "completion": "I think my answer to this probably varies depending on the area. For reference, I think the policy you want to follow is: do update on the fact that no one else wants to do this thing, take that really seriously, and engage with it a lot before deciding whether you want to do it. Ideally that’s going to involve engaging with the people who’ve made that decision to understand where they’re coming from. I think I don’t have a very strong general sense of whether we’re more likely to make one mistake or the other. I think I’d expect the world systematically to make too much of the sort of thing that can be done unilaterally, so it gets done. In the context of this field, I don’t know if there are as many– Yes, I guess I don’t feel bad. I don’t feel super concerned about either failure mode. Maybe I don’t feel that bad about where people are at."} {"prompt": "The vibe I get in general from the AI policy and strategy people is that they are pretty cautious, quite cautious about what they say and what they do. I guess that’s been a deliberate decision, but I do sometimes wonder whether that’s swung too far in favor of not speaking out enough about their views?", "completion": "Yes, I guess there are certainly people who have taken that approach– and it’s a diversity of what people do, which I guess is the whole problem. I guess there are definitely people who take a very cautious perspective."} {"prompt": "I think that they sometimes get a bit cut out of the public discussion, because they’re just not inclined to speak out, which can be a loss at times.", "completion": "Yes, I definitely think it seems like a real problem if you think that a positive part of your channel for impact is communicating your views, but then are very hung up on that or take a strong ‘shouldn’t communicate views’ position because of unilateralist concerns. I guess in general, within the family of unilateralist concerns, the one I’m least sympathetic to probably applies to this intervention: talking seriously about what kinds of mechanisms might be in place, and how we might respond, if it turned out that AI alignment was hard or if AI progress was rapid. That’s probably the place I’m least sympathetic overall, but I think the cost-benefit looks pretty good on that discussion."} {"prompt": "Could you say you default towards, on most issues, expressing your true views, even if you’re not taking action?", "completion": "Or at least, to the extent there’s useful collaborative cognitive work of thinking about what should be done, how we would respond or what would happen, being willing to engage in that work as a community, rather than people thinking in private. Maybe taking some care to– well, you don’t want to say inflammatory stuff, you don’t want to get people really upset, but you can be reasonable about it. I guess it’s not really that my view is, I don’t care one way or the other. It’s more that I’m ambivalent– I don’t think it’s obvious that there’s a big error one way or the other."} {"prompt": "All right. 
Let’s talk about something pretty different, which is that you recently wrote a post about why divesting from companies that do harmful things could, in moderation, actually be quite an effective way to improve the world. That’s in contrast to what most people who’ve looked into this under the rubric of effective altruism have tended to conclude, which is that it’s actually not that useful, because if you sell shares in a company or don’t lend money to a company, then someone else will just take your place, and you haven’t really made any difference. Do you want to explain the mechanism by which divesting from harmful companies, I guess like cigarette companies, for example, could be useful?", "completion": "Yes, I think there’s two important things to say upfront. One is that I’m mostly thinking about the ratio of costs to benefits. You can end up for some companies in a regime where divestment has relatively little effect, but is also quite cheap. In general, I think the first epsilon of divestment will tend to be literally free– the costs are second-order in terms of how far you divest and the benefits are first-order. So it is almost always going to be worth it to divest at least by that epsilon. That’s the first part of the picture. This can be mostly a story about the costs being very, very low, rather than benefits being large, and if the costs are very, very low, then it’s mostly an issue of having to do the analysis and having to deal with the logistics, in which case, I think it is plausible that one should– You can imagine really getting those costs down if someone both did the research and actually produced a fund. I could imagine myself personally being like, sure, I will put 0.1% of my wealth in some fund. There’s just this roughly market-neutral thing that shorts all the companies I really don’t like the activities of and goes long on other, correlated companies. That’s one thing– it might just be about the costs and benefits both being small, such that it’s not going to be a big deal for an individual investor and is maybe not worth thinking that much about, but if someone was willing to produce a product that could be scaled a lot, and everyone could just very quickly or very easily buy the fund, then they might do that. Maybe the second thing is in terms of how it could actually be possible, or why it isn’t literally completely offset. I think the rough mechanism is, when I get out of a company– let’s suppose I care about oil and I divest from companies that are producing oil– the whole way that does good is by increasing the expected returns to investment in oil companies, and that increases as I divest more. The concern is other investors will just continue putting more money into oil companies until the expected returns have fallen to market returns, because otherwise why not just keep putting more money in. The thing that simplified picture misses is that there is idiosyncratic risk in the oil industry– namely, as oil becomes a larger and larger part of my portfolio, more and more of the volatility of my portfolio is driven not by what is overall going on in the market, which is the composite of many sectors, but just by volatility in oil in particular. If I try and go overweight– like 10% oil, like there was a lot of divestment and people had to go overweight by 10% to offset it– they would actually be significantly increasing the riskiness of marginal oil investments. The returns that they would demand in order to offset that risk would also go up. There’s two things. 
One is, it actually does require doing a kind of– well, it depends a little bit how rational I believe investors are. In some sense, the divestment story, like the pessimism, already relied on rational investors, so it’s maybe more reasonable to say, let’s actually dig in and see how rational investors would respond and do those calculations. That’s like, yes, that’s my perspective. I think it’s unusually reasonable to look into that when the pessimism is coming from this homo economicus model. Once you’re doing that, then there are two questions. One is just, qualitatively, how large are the effects that we’re talking about– it’s something I tried to run through in this blog post, and I was surprised how large they were when I thought about it a little bit. Maybe the second observation is that actually there’s this cancellation that occurs, where roughly speaking, if the oil industry has no idiosyncratic risk or has very low idiosyncratic risk, your divestment will get almost entirely offset, but at the same time it has almost no cost to you, because the industry has almost no excess returns, because those returns would be tied to the idiosyncratic risk. You actually end up with the fact that the cost-effectiveness doesn’t depend on this parameter which governs how much your divestment is going to get offset– the ratio between costs and benefits doesn’t depend on that parameter because it affects both costs and benefits equally. It affects what is the overall upside to organizing this fund, like how much will get offset, but it doesn’t affect how attractive it is for an individual investor. So I think, yes, we could go into the details and talk about how much it makes sense to divest. It might often make sense to divest completely, or maybe even go like 100% or 200% short in industries you don’t like. It can be even better– an example of divestment that might be particularly cost-effective is, suppose there’s two companies who are producing very similar products and so are very correlated, like maybe two companies that both produce poultry, and one of them has substantially worse animal welfare practices. You might think there’s a lot of the risk in animal agriculture in general which is going to be experienced equally by those two companies. So if you’re long the company you like and short the company you dislike, that has relatively little risk– it still has idiosyncratic risk specific to those companies, and there’s a complicated analysis there– but you can end up with relatively little risk compared to how much effect you have on capital availability for the two companies. We can actually talk about the mechanism by which this causes less bad stuff to happen in the world. Here we’re really just talking about why I’m skeptical of the skepticism."} {"prompt": "Yes. Let’s set that aside for a minute. Just to explain this in really simple language. The previous thinking has been that if you sell shares in a company then someone else who just doesn’t care about the moral issues to do with raising chickens or producing oil, they’re just going to swoop in and buy it at the same price, and the share price won’t be changed or the amount of money that the company can borrow won’t really be changed. 
The thing that misses, is that if a decent number of people or even like a small number of people stop buying oil stocks then say, like, rich investment funds that don’t care about the moral issues, for them to go and buy even more of these fossil fuel companies that they don’t want to do that or their willingness to do that isn’t unlimited because they want to be diversified across all of the different assets in the world. In order to buy extra oil shares to make up for the fact that you and I don’t want to own them, they have to reduce the diversification that they have which is unappealing to them.If a bunch of people or even just a few short or sell these shares, it actually probably will suppress their price a little bit because people will have to be compensated for the reduced diversification with a lower share price to make it more appealing to buy. That’s one thing. Also, while that effect might be pretty small in the scheme of things, it’s also the case that just like selling those first few shares of those oil companies, it wasn’t important to you to own those specific companies anyway. It’s just like you can like slightly reduce your diversification. Just sell tiny amounts of these companies that you own in your portfolio, which costs you practically nothing. Even though the benefit is quite small, the costs could potentially be even smaller because this just doesn’t matter that much. Then the ratio of benefits to costs could be pretty large even if like this is not the best way to have an impact in the world.", "completion": "Yes. That’s right. If you want to think about what the total impact is like, it’s reasonable to imagine scaling this up to large numbers of investors doing it like a lot of the effects are going to be roughly linear in the relevant range. The total impacts are not that bad. They don’t look great but they look like if you imagine a change where everyone like large factions of people divested I think it would meaningfully decrease the amount of oil that gets extracted or the number of chickens raised in captivity, especially in cases where you have– I think maybe the oil case is a little bit unfavorable in this way compared to the chicken case where you could really imagine slightly shifting towards more practices of better animal welfare. Yes. You can imagine slightly shifting towards practices that are better for animal welfare or towards different kinds of meat or so on.The total effect is probably not that big. The total effect may still be large enough to justify really getting the logistics sorted out. It’s very easy to do. It’s from the investor’s perspective other than the hassle of doing it, I think it’s actually pretty definitely the first unit is a very good deal."} {"prompt": "Well, couldn’t you just buy into an investment fund that doesn’t own oil companies or doesn’t own animal agricultural companies? That seems like the first part, that’s pretty straightforward to do.", "completion": "Yes. Also, I’m thinking then, so when I buy a fund there’s a bunch of things that constrain my choices and it’s kind of annoying if now I have this extra constraint on top of those that might be a reason and–"} {"prompt": "It’s not quite worth it?", "completion": "Yes. Even if you slightly raise management fees on that fund. So like Vanguard is going to offer me some tinsy–"} {"prompt": "0.074.", "completion": "Yes. Now I have to pay 0.1% on the same thing. That’s no good. 
I would normally imagine my baseline implementation would be a fund that shorts the relevant, the particular companies you care about and maybe also opens up the offsetting rates. The reason it was bad to sell this company is because we were losing diversification so they can try to do things to offset those costs as part of the same bundle. I would be very interested just seeing that there’s the optimal divestment fund for people who care about animal welfare or whatever. That just holds mostly if there’s really large short positions in the company that have the worst animal welfare effects.Then also construct a portfolio to capture as much as possible the diversification benefits that those would have added to your portfolio. The cost of investing in that can be pretty low and you can just then put that on top of– do whatever else you would have done in investing, then take 0.1% of your money or whatever, 1% of your money and put it in this fund. On average that fund is going to make zero dollars and it’s going to have some risks. The cost to you is just the risk of this fund, that on average, is making no money but it could be relatively- if you’re not investing that much of your money in it, the risk is just not that bad."} {"prompt": "To what extent, if it is, is this analogous to being useful to not go and work at an evil company?", "completion": "I think it is fairly analogous. There’s a bunch of quantitative parameters. If you take a certain economic perspective they’re very structurally analogous. The discussion we’re having about risk, which is important to determine some of the relevant elasticities is quite different from the analogous discussion in the case of working in a problematic industry. I think the overall thing is kind of similar. Well, if you don’t work in that industry overall what happens is that prices go up or wages go up a little bit in the industry and that induces more people to enter. We just have to talk about how much do wages go up.One thing I sort of think about is if we consider say animal agriculture. It’s also kind of analogous to the discussion with ethical consumption. I think that’s actually a really good comparison point for divestments. Where you could say, “I want to consume fewer animal products in order to decrease the number of animals we get produced.” Then you have a very similar discussion about what are the relative elasticities are like. One way you could think about it is if you decrease demand by 1%, you decrease labor force by 1% and you decrease the availability of capital by 1%. If you did all of those things then you would kind of decrease the total amount produced by 1% roughly under some assumptions about how natural resources work and so on.The credit for that 1% decrease is somehow divided up across the various factors on the supply side and demand side and the elasticities determine how it is divided up. I think it’s not like 100% consumption or like 100% of labor, I think all of those factors are participating to a nontrivial extent.The comparison to ethical consumption I think looks reasonably good. I think under pretty plausible assumptions you’re getting more bang for your buck from divesting from– I haven’t done this analysis really carefully and I think it would be a really interesting thing to do and would be a good motivation if I wanted to put together the animal welfare divestment fund. 
I think under pretty plausible assumptions you’re getting a lot more bang for your buck from the divestment than from the consumption choices. Probably you’d still want like the consumption– the investment thing would be relatively small compared to your total consumption pattern. It wouldn’t be like replacing your ethical consumption choice. If ethical consumption was a good idea, then also at least totally divesting, and maybe even 10X leveraged short positions– when you would have bought one dollar of animal agriculture companies, instead you sell $10– I think stuff like that could be justified if you thought that ethical consumption was a good thing."} {"prompt": "Do you just want to map out, or sketch out briefly for those who are skeptical, how it is that selling shares in a company or selling bonds in a company reduces the output of that company?", "completion": "Yes. I think the bond case is a little bit simpler to think about, though I think they’re probably about the same. Let’s talk about the bond case. Say the company Tyson wants to raise a dollar. They’ll go out to investors and say, “Give us a dollar now and we’ll give you some amount of money 10 years from now assuming we’re still solvent.” That’s their pitch. They’re selling these pieces of paper to people which are like ‘I owe yous’. The price of those ‘I owe yous’, or how much the ‘I owe yous’ have to offer, is set by supply and demand amongst investors. What happens when you short the bond is– someone came to this company and wanted to loan them a dollar, and you’re saying, “Don’t loan them a dollar. Instead, loan me a dollar, and whatever it is that they pay back to their bondholders, I’ll pay it back to you instead.” They’re like, “Fine. I’m just as happy to lend to you as I was to lend to the actual company.” Now the company has one less dollar. Now the company’s like, “We still need to raise that dollar if we want to produce this additional marginal chicken.” Now the company goes and tries to raise the dollar, but they’ve used up one of the willing buyers. Now they need to find another buyer, someone who’s willing to loan them this dollar. That person is going to be a little bit less excited because, again, their portfolio is a little bit more overweight in this company, so they’re a little bit more scared about the risk of this company going under. Roughly speaking, that’s the mechanism."} {"prompt": "I think it makes sense. You imagine, what if a significant number of people just weren’t willing to lend money to a company, and this drives up their borrowing cost, and so the company shrinks because they have to pay a higher interest rate and can’t get as much capital. It kind of makes sense on an intuitive level. Some of this gets a little bit technical, so we’ll stick up a link to the blog post that you wrote with all the equations, explaining how you worked this through and tried to estimate the size of the benefits and the costs.", "completion": "I’m concerned it’s not the most careful or clear analysis for people. I’m interested, and I think at some point I will have a more careful version that I put up. It was just a fun exercise for me."} {"prompt": "You make some points that I haven’t seen anywhere else, and that actually might shift the conclusion. 
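A minimal numerical sketch of the cost-benefit asymmetry described above, under a toy mean-variance model. This is an illustrative gloss rather than the actual analysis in the post; the risk-aversion, variance, and return numbers are made-up assumptions.

```python
# Toy mean-variance illustration of "the costs of divestment are second-order,
# the benefits are first-order". All parameter values are illustrative assumptions.

gamma = 2.0     # investor risk aversion (assumed)
sigma2 = 0.04   # idiosyncratic variance of the disliked industry (assumed)
mu = 0.02       # equilibrium excess return of that industry (assumed)

w_star = mu / (gamma * sigma2)  # the investor's optimal weight in the industry

def utility(w):
    """Mean-variance utility of holding portfolio weight w in the industry."""
    return w * mu - 0.5 * gamma * sigma2 * w ** 2

for delta in [0.01, 0.05, 0.10]:
    # Cost to the divestor of holding delta less than optimal:
    # exactly 0.5 * gamma * sigma2 * delta**2, i.e. second-order in delta.
    cost = utility(w_star) - utility(w_star - delta)
    # Extra excess return the marginal non-divesting holder demands to absorb the
    # slack -- a proxy for the hit to the industry's cost of capital, first-order in delta.
    benefit_proxy = gamma * sigma2 * delta
    print(f"delta={delta:.2f}  cost={cost:.6f}  benefit={benefit_proxy:.6f}  "
          f"ratio={benefit_proxy / cost:.0f}")
```

The ratio works out to 2/delta, and the gamma*sigma2 factor cancels, mirroring the point above that the parameter governing how much a divestment gets offset affects costs and benefits equally.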
That seems like probably the most important thing that people need to take on board.", "completion": "That’d be super interesting to me if we actually ended up with a divestment fund– a long-short fund that was reasonably constructed and cost-effective. That would be kind of cool. Also, sorry– earlier I compared this with ethical consumption. I think one thing I want to stress there is that the way the work is getting done is just because the slight changes on the margin are very effective. It’s very similar to vegetarianism. If you just stop eating meat in cases where it was really marginal, that has a lot more bang for your buck than if you go all the way. It’s the same thing here. It’s not going to be competitive with the first unit of stopping eating meat. It’s going to be competitive with going all the way to the last bits."} {"prompt": "It’s going to be a little bit embarrassing if effective altruism-aligned folks have been saying divestment is a waste of time for all these years, and it turns out that we’re pretty wrong about that. [laughs] Going to have to eat humble pie. I suppose it also looks good that we’re updating our views so we’re not just stuck with dogmatic positions.", "completion": "I think we also most likely end up with some compromise where we’re like, look, the impacts are a lot smaller than people will often implicitly assume when they are pitching this. It is, on balance, like a reasonable thing to do. Maybe we shouldn’t have been quite so down on it."} {"prompt": "The costs are negligible a lot of the time.", "completion": "Or it is really a social thing, where the cost is just people believing it. The only difficulty to make it happen is people believing they should do it. Therefore, it’s like a reasonable– If there’s a change people can make that costs them almost nothing, it’s a particularly reasonable thing to advocate for people to do."} {"prompt": "Let’s talk about s-risks for a minute. Some listeners will know, but some people won’t, that s-risks is this term people have settled on to describe possible future scenarios that are not just neutral– where humans go extinct and then there’s nothing– or not very good– where humans stick around but then we just don’t make the world as good as it could be– but rather worlds where there are astronomical levels of bad things. I guess S, in this case, stands for suffering, because a lot of people tend to be concerned that the future might contain a lot of suffering. It could also just include any future that is large in the sense that a lot of stuff is going on, but it also contains a lot of bad stuff rather than good stuff. Some of the ways that people are aware this could happen involve artificial intelligence that doesn’t share our goals. What’s your overall take on s-risk as a problem to work on?", "completion": "I think my best guess is that if you go out into the universe and optimize everything for being good, the total level of goodness delivered is commensurate with the total amount of badness that would be delivered if you went out into the universe and optimized it for things being bad. 
I think to the extent that one has that empirical view– or maybe moral view, some combination of empirical view and moral view about the nature of what is good and what is bad– then S-risks are not particularly concerning, because people are so much more likely to be optimizing the universe for good. So much more, in expectation, the stuff in the universe is optimized for exactly what Paul wants rather than exactly what Paul doesn’t want. That’s my best guess view. On my best guess view, I think this is not a big concern. I think I do have considerable moral uncertainty. I guess the way that I would approach moral uncertainty in general would say that, even in expectation, it’s sort of hard to compare the expectations of outcomes across these very different moral views. This is one of the cases where the comparison is difficult because of those weird difficulties with inter-theoretic utility comparisons. The way I would normally think about this kind of case is to say I should put reasonable priority on, or reasonable interest in, reducing S-risks if I put a reasonable probability on views on which the total amount of possible badness is much larger than the total amount of possible goodness, which is where I’m at. I think it’s not likely, but there are plausible combinations of empirical and moral views on which they’re very important. That’s my starting point– taking this as a thing which I’m not going to put much weight on, because I don’t find that perspective particularly appealing, so it’s not going to be a large fraction of my total concern. It deserves some concern because it’s a plausible perspective."} {"prompt": "The naive take on this might be, “Why would we worry about these scenarios because they seem really outlandish? Why would anyone set out to fill the universe with things that are really bad? That just seems like a very odd thing to do.” Once you’re at the level of sophistication where, yes, you can go out and colonize space and create astronomically massive stuff, why are you filling it with stuff that’s bad? Then there’s something to be said for that, but then people try to think about scenarios in which this might happen, which might involve conflicts between different groups where one of them threatens the other that they’re going to do something bad and then they follow through in doing that, or potentially where we don’t realize that we’re creating something that’s bad. You might create something that has a lot of good in it, but also has a bunch of bad in it as well, and you go out and spread that. We just don’t realize that as a side effect we’re also creating a bunch of suffering or some other disvalue. How plausible do you think any of these scenarios are?", "completion": "I guess the one that seems by far most plausible to me is this conflicts, threats and following through on threats model."} {"prompt": "Not just moral error potentially?", "completion": "I think it’s hard to make a sufficiently extreme moral error. There might be moral error that’s combined with threats being followed through on, but it’s hard for me to see the risk from that being larger than the risk of me getting what is good very nearly exactly backwards. It’s not totally impossible to get things exactly backwards. It’s more likely than hitting some random point in the space, for a wide variety of reasons, but I think it’s still a minority of my total concern. 
Most of it comes from someone wanting to just destroy shit because they wanted to have a threat of destruction or of destroying value, so that’s what I would mostly be worried about."} {"prompt": "How plausible is it? It seems like you think it’s conceivable but pretty unlikely, so it’s like you pay a little bit of attention to it, but it’s not going to be a big focus. Is that kind of the bottom line?", "completion": "Yes, so when I was talking before about these comparisons– how being conceivable means it gets a little bit of priority– that was more with respect to sort of moral views, or aggregation across different values and how much weight I give them. Well, I think the key question is how much credence you place on views where the worst outcomes are much more bad than the best outcomes are good, and then I think those views, if the ratio is large enough, are basically going to recommend just focusing entirely on minimizing this risk of really bad stuff. I think regardless of one’s empirical view, it’s worth putting some amount of attention into reducing the risk of really bad stuff. In terms of how plausible it is, it’s still important to understand the basic shape of what’s up. I don’t really have a considered view on this. I think the answer is, it’s relatively unlikely to have significant amounts of disvalue created in this way– but not unlikely like the one-in-a-million level; more like the 1% level. Then there’s a question of, when things are bad, what fraction of the badness is realized compared to the worst possible outcome and how much of the universe’s resources go into that, and that estimate is not very stable. That’s where I’m at."} {"prompt": "You made this argument at the start that, naively, you would think that it’s as easy to create good things as to create something that’s equivalently bad, and so more future beings are going to want to create good things than bad things, so we should expect the future to be positive. How confident are you that that’s actually true, that it’s symmetrically easy to create good and bad things?", "completion": "When you say symmetrically easy to create good and bad things, I think it’s worth being clear about what exactly that means. I think the relevant thing here, assuming that we’re linear– things twice as big are twice as good or bad– is just what is your trade-off? Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do; what does P have to be so that you’re indifferent between that and the barren universe? I think most of my probability is distributed between “you would need somewhere between a 50% and 99% chance of good things”, and then I put some probability or some credence on views where that number is a quadrillion times larger or something, in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers– numbers easily large enough to swamp the actual probabilities involved. A quadrillion is just way too big. I should have gone with ‘a bajillion’, which was my first… Anyway, in terms of how confident I am on the 50%, or on the 50% to 99%, I think I would put a weight of a half to a third on exactly 50% or things very close to 50%, and then most of the rest gets split between somewhat more than 50% rather than radically more than 50%. I think that those arguments are a little bit complicated– how do you get at these? 
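For concreteness, the indifference point being described can be written as a one-line calculation; the formalization and the V notation are an illustrative gloss, not something taken from the conversation.

```latex
% Value of the gamble: P * V_good + (1 - P) * V_bad, with V_good > 0 and V_bad < 0.
% Setting it equal to the barren universe (value 0) gives the indifference point:
P \cdot V_{\text{good}} + (1 - P) \cdot V_{\text{bad}} = 0
\quad\Longrightarrow\quad
P^{*} = \frac{|V_{\text{bad}}|}{V_{\text{good}} + |V_{\text{bad}}|}
% If the worst outcome is exactly as bad as the best is good, P* = 50%;
% if it is 99 times worse, P* = 99%.
```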
I think to clarify the basic position, the reason that you end up concluding it’s worse is just like conceal your intuition about how bad the worst thing that can happen to a person is the best thing or damn, the worst thing seems pretty bad and then the like first-pass responses, sort of have this debunking understanding, or we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.If you look at what happens over evolutionary history. What is the range of things that can happen to an organism and how should an organism be trading off like best possible versus worst possible outcomes. Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased versus to what extent is this then fundamentally reflected in our preferences about good and bad things. I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation."} {"prompt": "Yes. How do you think technical AI research or your focus would change if preventing S-risks became a high priority?", "completion": "I think the biggest thing is this understanding better the dynamics that could possibly lead to bad threats being carried through on and understanding how we can arrange things. It’s less likely for that to happen. I think that’s the natural top priority."} {"prompt": "I heard an interesting suggestion for how to do that recently which was– Concern you might have is that someone would threaten to create the thing that you think is really disvaluable. Let’s say I’m concerned about, I don’t want suffering to exist in the future. That leaves me open to someone threatening to create a suffering in order to get me to concede on some other point. But I could potentially avoid that risk by say, changing myself so that I also disvalue something that was actually not important at all. Let’s say I want to- I also really don’t like there being– I don’t want there to be like flying horses or something like that. Something that doesn’t exist.In that case, if someone wanted to extort me there or wanted to threaten me then they can instead rather than threaten to create suffering, they would instead have the option of threatening to create flying horses which currently don’t exist but they could threaten to create them. Potentially, I could change my value such that it’s more efficient to create that than it would be to create suffering and so that would be the most efficient threat I think to threaten me with. It’s kind of this spillover like part of your utility function that protects you from threats about the things that you previously cared about. Do you have any reaction to that idea or things in that vein?", "completion": "I think my initial take seemed crazy and then since then I have become significantly more enthusiastic about it or it seems plausible. I think actually one of the like I was giving out a prize last year for things that seemed relevant to AI alignment or to AI leading to a good outcome. I think one of them, Caspar from EAF, gave some version of this proposal. He submitted some proposals along these lines and at that point I thought about it more and I was somewhat compelled. 
I think since then he’s continued to think about that and that seems interesting.I think a perspective on that that I find somewhat more plausible than like don’t care about that thing is, you could say I care a lot about this random thing like how many flying horses there are. You could also take this perspective that’s kind of like a big bounty. If you were to demonstrate to me convincingly to you could have run this strategy, it would have had a significant chance of causing extreme disvalue and would have coerced me into doing X. It would have in fact caused me to do X. You can just demonstrate that sufficiently convincingly and then really once you’ve persuaded me of that I’m like, “Hey, fine.” You can have whatever outcome you would have in fact achieved, an outcome which is from your perspective like incrementally better than whatever outcome that you would have achieved by carrying through this risky policy.It’s not clear. I think it’s incredibly complicated. I’ve started to spend a little bit of time thinking about this and it’s just incredibly complicated to figure out if it’s a good idea. Well, not if it’s a good idea but whether it really works. I think it’s like a thing I’ll be interested in people thinking more about it. It’s definitely one of the things I’ll be doing and understand the conditions under which bad threats could fall through on. I think that makes less difference than other common-sense cool interventions like avoiding the situation where there are people threatening each other. It is a lot easier to make, to get into flexible traction on these more obvious open-questions there."} {"prompt": "One reason that people work on S-risks is that they are more worried about preventing bad things than they are about creating good things. Another rationale might be, even if you are symmetric in that point I would be that there’s more people working on trying to prevent extinction or trying to make the future go well than there are people worrying about the worst-case scenarios and trying to prevent them, so it’s like potentially right now a more neglected problem that deserves more attention than it’s getting. Did you put much, or any weight on that?", "completion": "I think ultimately I mostly care about neglectedness because of how it translates to tractability. I don’t think this problem is currently more tractable than– I don’t feel like it’s more tractable than AI alignment. Maybe they seem like they’re in the same ballpark in terms of tractability. I don’t think, part of it is like a harder problem to deal with. It also have these concerns where it’s not– There are a bunch of reasons, it’s like maybe less tractable on its face than alignment."} {"prompt": "Why is that?", "completion": "I think the basic source of a lot of difficulty is that part of the source is the threat model for alignment is incredibly clear like you have this nice model in which you can work. You understand what might go wrong. I mean it’s absurd to be comparing alignment to a problem and be like it’s incredibly clear and concrete. That basically never happens. Anyway, but in this one comparison, we can be like it’s unusually much more clear and concrete whereas here we’re like, “Geez.” It’s quite a fuzzy kind of difficulty and the things that we’re going to do are all much more like bing shots. 
I don’t know, I think it’s a messy subject."} {"prompt": "Quite a lot of people think that these risks of bad outcomes and threats are more likely in a multipolar scenario where you have a lot of groups that are competing over having influence over the future and I guess over potentially the use of artificial intelligence or whatever other technologies end up mattering. Do you share that intuition?", "completion": "I think it’s at least somewhat worse. I don’t know how much worse like maybe twice as bad seems like a plausible first pass guess. The thing is turning a lot on how sensitive people are threatening each other in the world. That seems bad. That’s like one major source of threats and it’s like if you have fewer, if you have less rapid competition amongst people you’d expect to have less of that going down. Some questions about how sensitive the number of threats people are making against each other to amounts of polarity seems pretty sensitive. Then what fraction of all threats occur say over the next hundred years and that kind of dynamic."} {"prompt": "Do you have any thoughts on coordination between people who are mostly focused on making the future contain good things and people who are mostly focused on making sure that it doesn’t have bad things?", "completion": "Mostly, I think the reason they’re going to end up coordinating is via they’re pursuing similar approaches like cognitive styles for thinking about the situation and people should be coordinating. Generically, it’s nice if people can coordinate even if their goals are on the object level– Even if you have totally orthogonal goals, it’d be good if you share resources and talk and benefit from the resources of each other and so on. We do have somewhat- people normally don’t have extreme bias of one end to the other. That’s the main channel for coordination, you’d also hope for some kind of more cooperation, just through overlapping objectives that happen to serve both. I think that’s like a less important channel here like both communities can be happier and healthier and get along better if we all care to some extent about these different things and we, therefore, should all help each other out."} {"prompt": "Let’s talk a second about philosophy and ethics and AI. What role do you think different theories of metaethics play in AI alignment and potentially AI alignment research?", "completion": "I think there are two qualitatively different ways that philosophical progress could affect AI alignment. One is on the object level thinking that the work we need to do when aligning an AI involves clarifying some philosophical questions. A different one is like the way that you’ll approach alignment depending on your views on some of those philosophical questions. I think it’s worth distinguishing those.So on the object level, if you thought that you had to understand some process which would ultimately converge to a correct understanding of the good and that you had to directly impart that into an AI system you built, then it’d be a really rough position where you either have to solve a bunch of philosophy yourself, what Wei Dai calls metaphilosophy, understanding by what process humans arrive the truth when doing philosophical inquiry. That seems pretty rough.Then there’d be this really tiny object-level connection where you might even end up saying these are basically the same problem. 
I think that’s a perspective maybe that’s closer to where I was six years ago, and I’ve really shifted a lot towards like, “Look, you just want to have a system that is– The thing you want to be clarifying is this notion of control and course-correction.” You want to say we want the construction of the AI to not make anything worse. We want to end up in a position like the one we’re currently in, where we get to continue going through the same process of deliberation, understanding the ways in which that process goes well or poorly and correcting them, and avoid– We want to be making essentially as few ethical commitments as we can at the point we’re constructing AI. And I’ve become much more pessimistic about any approach that essentially involves any hard philosophical commitments. I think we’ll still end up making some, and I’m happy to talk about the ones that we’re most likely to make, but I don’t think– I think if things go okay, it’s probably because we can dodge most of them."} {"prompt": "Why have you become more pessimistic?", "completion": "I think in part it’s from technical details of the case and just thinking about how different approaches to alignment might play out. By that I mean something like, I think you have to be leaning on this mechanism of course-correction by humans, or deferring to this process of human deliberation, a lot anyway to have any hope, apart from philosophical issues, and once you’re leaning on it in general, you might as well also just lean on it for answering these questions. In part, it’s just that I’ve become a lot more optimistic about the prospects for that. You might ask how important is it that the AI understands what you want to happen with the universe as a whole when it goes out and acts on your behalf. I’ve updated a lot towards it’s okay if it doesn’t really understand– even in really pessimistic cases where stuff is getting crazy. There’s going to be like six minutes between when you start your AI and when the colonization of the universe begins. I think even then it’s basically fine if it doesn’t understand what you want for the universe that much. It just understands like, “Look, here’s me.” It puts you in a box somewhere– now, let’s start colonizing the universe and then eventually make some space for your box to sort out what humanity should do, like a little civilization planted somewhere out in the backwoods trying to figure out what is good, and then ultimately the process just remains responsive to the conclusions of that deliberation. The thing it has to understand is what does it mean to protect humanity and allow humanity to develop and mature in the way that we want to develop and mature. Then what does it mean to ultimately be responsive to what that process concludes? To be correctable, once humans figure out what it is we want in some domain? To allow that understanding to ultimately affect the behavior of this scaffolding of automation we’ve built up around us? There’s maybe one last, more technical question that comes up there. You might think philosophy would affect the value of different kinds of resources, and there’s some more– I think you can sort of dodge these kinds of dependence once you’re more careful about these arguments."} {"prompt": "How do you feel about the idea of the long reflection– this idea that we don’t really know what’s valuable. 
It seems like we might have a better shot if we get our best people to think about it for like thousands of years, or just a very long time, until we decide what would be the best thing to do with all of the resources that we can get in the universe. Is that a sound idea?", "completion": "I think viewing it as a step which occurs is probably not quite right. I think I’m pretty on board with the idea that there is this process of deliberation, of understanding what is good, that we’re currently engaged in. I see most of the action in AI as allowing that process to continue happening. I think it can be that that process ultimately decouples from most of the economic expansion through the universe– the expansion is decoupled from this process of ongoing deliberation, of understanding what we want. You can imagine a world where humans are living on earth while out there in space, a bunch of crazy shit is going down. AI is waging wars and building machines and stuff, and the humans are just like, we’re doing our normal thing on earth. Sometimes we futurists tend to think of that as a sideshow, where all the action is now off with the crazy stuff the AIs are doing. I’ve shifted more towards– actually, that’s where a lot of the action is: the overall evolution of our values. We choose what we want to be the store of value. I don’t think it’s like a person going off and thinking for a thousand years; it’s more like there’s a trajectory along which our civilization is developing, and we’re thinking about how that trajectory should be. I think that one of the things that happens– one of the hopes of AI alignment– is to decouple that process of ongoing deliberation from the process of remaining economically competitive. It’s a really hard problem to understand what that deliberation should look like. That’s like another one of the big ways– when I said at the beginning that I’d slightly downgraded my overall sense of how important alignment was relative to other parts of the problem, a lot of that has been up-weighting how important making that deliberation go well is. From a long term perspective, other than alignment, that seems like probably the top priority. I think that’s also worth– at the beginning, when we talked about metaethics, I made this distinction between object-level and meta-level influences. It’s worth bracketing that. Here we’ve just been diving in on the object level, and I’m happy to keep going with that. It’s worth saying briefly that at the meta level, I do think that your approach to alignment– like how important or how valuable different kinds of outcomes are– depends on the answers to some hard ethical questions, related to things like how much do you care about “the lizard people”? That’s a hard question in moral philosophy, and a similar hard question is how much do you care about this AI system that you’re building? If it wants to do something totally different from humans, how much should you be like, well, we have some values, the AI has some values, we’re happy. I think we talked about this a little bit on the last podcast. The answers to those kinds of moral questions do have an effect on how you go about alignment or how you prioritize different aspects of the problem."} {"prompt": "What are your views on moral realism and anti-realism? Do those affect what AI alignment work seems most important?", "completion": "I’m definitely pretty anti-realist. I think we get a little bit into semantic weeds here when I’ve had long discussions with people about it. 
I think there’s this question which feels like a realist question of, “Is deliberation going in the right direction?” I don’t think– well, you could have a version of the anti-realist perspective where you’re like, it doesn’t matter how you deliberate, you’re going to come to some conclusions and those are fine. I don’t endorse that. There’s another version of this perspective where you shouldn’t deliberate, you should just endorse your current conclusions because that’s what you like. I don’t endorse that either. I’d say, look, right now there’s some kind of process of deliberation and growth, and I endorse the output of this. In some way I want our values to evolve. In some sense, you could say that what I care about is the endpoint of that deliberative process– the endpoint of that potentially very long process of evolution and maturation. Philosophically, I don’t think there would necessarily be convergence across different– I think different processes of maturation would arrive at different conclusions. I think there is a very hard problem of having it go in the right direction. It’s a little bit awkward as a non-realist to be like, what does ‘the right direction’ mean here? The realist has this nice, easy answer, like it’s actually converging to the good. I think that’s just a linguistic thing where they happen to have a nice– again, the whole thing is semantic differences. It’s just that there are some concepts that are slippery or hard to talk about and think about for the non-realist. I think that’s because they are, in fact, for the realist, just pushed down into the slipperiness and complexity of the actual concept of good. In terms of what I find… my view overall on the object level questions that might affect how to prioritize different parts of alignment is, I don’t think there’s that much convergence. I think it’s quite plausible that an AI would be smart and would do its own thing. That’s somewhere between a barren universe and actually achieving the optimal outcome, and I don’t think it’s very close to barren. I don’t think it’s very close to the optimal outcome. I lean towards like a 50/50 prior– like, I’d be okay… I’d treat it as half as bad as extinction to have some random AI doing this random thing with the universe."} {"prompt": "We’ve been here for a while and we should wrap up. Earlier we were talking about how creatine could potentially make people a whole lot smarter, or sorry, three IQ points smarter. [laughs] A third of a standard deviation– it’s as good as it gets potentially. Do you have any view on nootropics and kind of the drugs and life-hacking stuff that people try to use to make themselves fitter and more intelligent? Is there much mileage to be gotten out of that?", "completion": "I’m definitely pretty excited about investigating here– it feels like a thing that the medical establishment is not just not into, but like very not into. I think there’s a reasonable chance that there’s something here. I think there are a few cases where the current state of the literature would– if you took everything at face value, you’d be like, it does seem like… I don’t know. I think maybe Piracetam is in this position, but I’m not totally sure. There’s a few other possible candidates. If you actually believed all the studies and just took that at face value, you would think that there’s reasonable gains from some of these. I think probably that’s just not the case. 
Everyone understands that these are older studies and there have probably been failed replications that haven’t been published. It would be pretty nice to go through and check them all and I’d be pretty excited about that.I’m pretty interested in general– I think the thing I would most like to see– I generally have this question of why would it be the case that some simple changes to brain chemistry would improve performance but not be made by evolution. You want to see what the countervailing consideration is."} {"prompt": "That does potentially explain why it has been pretty hard to find anything that seems like it works really well. If it was as easy as that then evolution would have done it.", "completion": "Yes, so I think you’re going to have to exploit some distributional change or some cost which is now a cost that wasn’t a cost historically. I think the two best candidates are basically, one, you can exploit this hypocrisy angle. People have this thing they want to do like, “I want to make the world better,” but at some level, it looks like your biology is not at all optimized for making the world better. It’s optimized for having descendants from descendants. One thing is if you want to pursue what you nominally want to do and you can hope that there are some drugs that just make you better at achieving the thing that you set out to do. Even when that thing is not in line with the thing that your body is optimized for achieving. I think that’s a mechanism of action in some cases and seems pretty realistic. It’s something I’ve been really scared of.Then the other one, the thing that would be most satisfying and excellent would be it just burns more energy. In an evolutionary environment, you didn’t want to run your brain hot because it’s kind of a waste and you’re only getting marginal benefits but if you just could do that, that would be super great."} {"prompt": "Just overclock it because it’s like, “We’ve got so much food now.”", "completion": "So much food. [laughs]"} {"prompt": "You told me I’m going to burn an extra 400 calories a day and be marginally smarter, I’m like, “That’s good on both sides.”", "completion": "Yes. [laughs] That would really be the best case definitely. I’m a little bit confused about where stimulants stand on that. Usually, my understanding for caffeine is that if I take caffeine I’m probably going to drive up blood pressure and drive up energy expenditure, at least at the beginning. Then if I keep taking it, probably within two weeks, at least on the blood pressure side, I’m going to return to baseline. I would like to understand better what long-term effects are on cognitive performance and energy use.I’d similarly like to understand whether those long-run effects. Is it the case that if you take something over and over again then eventually it stops having impact. Or is it the case that if you take it even once you have the bounce-back period over the next couple days such that the long-term effect is the integral of this thing that are showing that the integral is zero. I don’t know, anyway, it seems plausible to me there’s some wins, probably mostly through those two channels of either getting it to do what you think you should do, or trying to do, or else bring in more energy.I have personally have not prioritized experimenting with that kind of thing. In part, because I am really bad at introspection, so I cannot tell even if I’m in a very altered mental state. 
Partly because I think it’s really hard to get results with n=1, but I’m pretty excited about more experimentation on those things. It’s really hard, probably because the medical establishment really seems to hate this kind of thing. Some of the likely winners are also not legal, which sucks. Like, prima facie, amphetamines would probably be the most natural candidate. [laughter]"} {"prompt": "Do you think they would be good in the long run? It does seem like the body is so good at adapting to stimulants that most of the time they work for the first week or the first month, but then you’re back to baseline and now you’re just taking this thing that just returns you to normal. And while almost all of these cognitive enhancers are legal, with a handful of them like amphetamines, there’s a risk of getting into legal trouble if you take them, or even not being able to get a security clearance in the future if you’ve taken them, which is a really serious downside for anyone who might go into policy careers in the future. And people who might really want or need a security clearance at some point make up a large fraction of all our listeners. So in those cases, it’s clear you should stay away. But, basically, I don’t recommend that people pay much attention to nootropics, both because for many of them the evidence they work is weak in the first place, and because the body is so good at adapting to and undoing the effect of almost anything you take after a few weeks.", "completion": "That’s a lot of my concern. I mean, you could imagine. Some people who are prescribed stimulants, for example, have the sense that under repeated use, you do get some lasting advantage or at least a lasting treatment effect. It would be good to check whether that’s actually the case or whether that’s shuffling things around and zero-sum. It does kind of feel like once you’re burning more energy, there’s not really a good reason to expect it to stop working. If I’m trying to take some Piracetam and hoping that I think better, there’s sort of a good reason to expect that to fail and then lose to this evolutionary argument. In the case of a stimulant which is initially causing you to use more energy and think better, there’s not really a great reason to expect it to break down. So there’s hope at least. I think in my ideal world you’d really be throwing a lot of energy at the question, “Can we find a win here?” because it sure seems plausible."} {"prompt": "One hope I’ve had is that even if the brain is– It seems like if you take a stimulant, then you get a benefit from it, but your body is adapting to it, and then you have to have a time when your body de-adapts from it in order to take it again. If you, say, take it all the time for a week and then don’t take it for a week in order to flush out the adaptation from your system, that doesn’t seem so great. Potentially, you could do this every day, so that you’re a bit more awake during the day and then you de-adapt to it while you’re sleeping, and that seems potentially good on both sides. I’m not sure whether the de-adaptation over the 12 hours that you’re not taking it is really sufficient to make it worthwhile.", "completion": "Yes, I think it feels plausible to me there’s something here that works. 
It’s plausible to me that you should just run with something dumb like, “I thought about it a little bit on paper, I tried some.” I think it’s hard to do the experiments due to the combination of the medical establishment hating it and the instruments being a little bit of a hard thing; you’d have to get over that. Also, the most high-leverage options probably are not going to be legal, which makes it just overall less appealing. You can try this with caffeine, but there are probably just fewer wins there."} {"prompt": "Well, on that inspiring note, my guest today has been Paul Christiano. Thanks for coming back on the podcast, Paul.", "completion": "Yeah. Thanks for having me."} {"prompt": "Hello everybody. Today, I’ll be speaking with Paul Christiano. Paul is a researcher at the Alignment Research Center, where he works on developing means to align future machine learning systems with human interests. After graduating from a PhD in learning theory in 2017, he went on to research AI alignment at OpenAI, eventually running their language model alignment team. He’s also a research associate at the Future of Humanity Institute in Oxford, a board member at the research non-profit Ought, a technical advisor for Open Philanthropy, and the co-founder of the Summer Program on Applied Rationality and Cognition, a high school math camp. For links to what we’re discussing, you can check the description of this episode and you can read the transcript at axrp.net. Paul, welcome to AXRP.", "completion": "Thanks for having me on, looking forward to talking."} {"prompt": "All right. So, the first topic I want to talk about is this idea that AI might pose some kind of existential threat or an existential risk, and there’s this common definition of existential risk, which is a risk of something happening that would incapacitate humanity and limit its possibilities for development incredibly drastically, in a way comparable to human extinction, such as human extinction. Is that roughly the definition you use?", "completion": "Yeah. I think I don’t necessarily have a bright line around giant or drastic drops versus moderate drops. I often think in terms of the expected fraction of humanity’s potential that is lost. But yeah, that’s basically how I think of it. Anything that could cause us not to fulfill some large chunk of our potential. For AI in particular, I think a failure to align AI maybe makes the future, at my guess, 10% or 20% worse, or something like that, in expectation. And that makes it one of the worst things. I mean, not the worst; it’s a minority of the total amount by which we might fall short of our potential, but it’s a lot of falling short of our potential. You can’t have that many 20% hits before you’re down to no potential left."} {"prompt": "Yeah. When you say a 10% or 20% hit to human potential in expectation, do you mean if we definitely failed to align AI or do you mean we may or may not fail to align AI and overall that uncertainty equates to a 20%, or 10% to 20% hit?", "completion": "Yeah, that’s unconditionally. So I think if you told me we definitely mess up alignment maximally then I’m more like, oh, now we are looking at a pretty big, close to 100% drop. I wouldn’t go all the way to 100. It’s probably not literally as bad as a barren earth, but it’s pretty bad."} {"prompt": "Okay. Yeah. Supposing AI goes poorly or there’s some kind of existential risk posed by some kind of, I guess really bad AI, what do you imagine that looking like?", "completion": "Yeah. 
So I guess, I think most often about alignment, although I do think there are other ways that you could imagine AI going poorly."} {"prompt": "Okay. And what’s alignment?", "completion": "Yeah. So by alignment, I mean - I guess a little bit more specifically, we could say intent alignment - I mean the property that your AI is trying to do what you want it to do. So we’re building these AI systems. We imagine that they’re going to help us. They’re going to do all the things humans currently do for each other. They’re going to help us build things. They’re going to help us solve problems. A system is intent aligned if it’s trying to do what we want it to do. And it’s misaligned if it’s not trying to do what we want it to do. So a stereotypical bad case is you have some AI system that is sort of working at cross purposes to humanity. Maybe it wants to ensure that in the long run there are a lot of paperclips, and humanity wants human flourishing. And so the future is then some compromise between paperclips and human flourishing. And if you imagine that you have AI systems a lot more competent than humans that compromise may not be very favorable to humans. And then you might be basically all paperclips."} {"prompt": "Okay. So this is some world where you have an AI system, and the thing it’s trying to do is not what humans want it to do. And then not only is it a typical bad employee or something, it seems you think that it somehow takes over a bunch of stuff or gains some other power. How are you imagining it being much, much worse than having a really bad employee today?", "completion": "I think that the bad employee metaphor is not that bad. And maybe this is a place I part ways from some people who work on alignment. And the biggest difference is that you can imagine heading for a world where virtually all of the important cognitive work is done by machines. So it’s not as if you have one bad employee; it’s as if for every flesh and blood human there were 10 bad employees."} {"prompt": "Okay.", "completion": "And if you imagine a society in which almost all of the work is being done by these inhuman systems who want something that’s significantly at cross purposes, it’s possible to have social arrangements in which their desires are thwarted, but you’ve kind of set up a really bad position. And I think the best guess would be that what happens will not be what the humans want to happen, but what the systems who greatly outnumber us want to happen."} {"prompt": "Okay. So we delegate a bunch of cognitive work to these AI systems, and they’re not doing what we want. And I guess you further think it’s going to be hard to un-delegate that work. Why do you think it will be hard to un-delegate that work?", "completion": "I think there’s basically two problems. So one is, if you’re not delegating to your AI then what are you delegating to? So if delegating to AI is a really efficient way to get things done and there’s no other comparably efficient way to get things done, then it’s not really clear, right? There might be some general concern about the way in which AI systems are affecting the world, but it’s not really clear that people have a nice way to opt out. And that might be a very hard coordination problem. That’s one problem. The second problem is just, you may be unsure about whether things are going well or going poorly. 
If you imagine again, this world where it’s like there’s 10 billion humans and 100 billion human-level AI systems or something like that: if one day it’s like, oh, actually that was going really poorly that may not look like employees have embezzled a little money, it may instead look like they grabbed the machinery by which you could have chosen to delegate to someone else. It’s kind of like the ship has sailed once you’ve instantiated 100 billion of these employees to whom you’re delegating all this work. Maybe employee is kind of a weird or politically loaded metaphor. But the point is just you’ve made some collective system much more powerful than humans. One problem is you don’t have any other options. The other is that system could clearly stop you. Over time, eventually, you’re not going to be able to roll back those changes."} {"prompt": "Okay.", "completion": "Because almost all of the people doing anything in the world don’t want you to. “People” in quotes, don’t want you to roll back those changes."} {"prompt": "So some people think, probably what’s going to happen is one day all humans will wake up dead. You might think that it looks we’re just stuck on earth and AI systems get the whole rest of the universe or keep expanding until they meet aliens or something. What concretely do you think it looks like after that?", "completion": "I think it depends both on technical facts about AI and on some facts about how we respond. So some important context on this world: I think by default, if we weren’t being really careful, one of the things that would happen is AI systems would be running most militaries that mattered. So when we talk about all of the employees are bad, we don’t just mean people who are working in retail or working as scientists, we also mean the people who are taking orders when someone is like, “We’d like to blow up that city,” or whatever."} {"prompt": "Yep.", "completion": "So by default I think exactly how that looks depends on a lot of things but in most of the cases it involves… the humans are this tiny minority that’s going to be pretty easily crushed. And so there’s a question of like, do your AI systems want to crush humans, or do they just want to do something else with the universe, or what? If your AI systems wanted paperclips and your humans were like, “Oh, it’s okay. The AIs want paperclips. We’ll just turn them all off,” then you have a problem at the moment when the humans go to turn them all off or something. And that problem may look like the AIs just say like, “Sorry, I don’t want to be turned off.” And it may look like, and again, I think that could get pretty ugly if there’s a bunch of people like, “Oh, we don’t like the way in which we’ve built all of these machines doing all of this stuff.”", "completion": "If we’re really unhappy with what they’re doing, that could end up looking like violent conflict, it could end up looking like people being manipulated to go on a certain course. It kind of depends on how humans attempt to keep the future on track, if at all. And then what resources are at the disposal of AI systems that want the future to go in this inhuman direction? Yeah. I think that probably my default visualization is humans won’t actually make much effort, really. We won’t be in the world where it’s all the forces of humanity arrayed against the forces of machines. It’s more just the world will gradually drift off the rails. 
By “gradually drift off the rails” I mean humans will have less and less idea what’s going on.", "completion": "Imagine some really rich person who on paper has a ton of money, and is asking for things to happen, but they give instructions to their subordinates and then somehow nothing really ever ends up happening. They don’t know who they’re supposed to talk to and they are never able to figure out what’s happening on the ground or who to hold accountable. That’s kind of my default picture. I think the reason that I have that default picture is just that, in the cases where we fail, I don’t expect humans to really be pushing back that hard. I think if we were really unhappy with that situation, then you might not gradually drift off the rails; if you really are messing up alignment, then instead of gradually drifting off the rails it looks more like an outbreak of violent conflict or something like that."} {"prompt": "So, I think that’s a good sense of what you see as the risks of having really smart AIs that are not aligned. Do you think that that is the main kind of AI-generated existential risk to worry about, or do you think that there are others that you’re not focusing on but that might exist?", "completion": "Yeah. I think that there’s two issues here. One is that I kind of expect a general acceleration of everything that’s happening in the world. So in the world now, you might think that it takes 20 to 50 years for things to change a lot. Long ago it used to take hundreds of years for things to change a lot. I do expect we will live to see a world where it takes a couple years and then maybe a couple months for things to change a lot. In some sense that entire acceleration is likely to be really tied up with AI. If you’re imagining the world where next year the world looks completely different and is much larger than it was this year, that involves a lot of activity that humans aren’t really involved in or understanding.", "completion": "So I do think that a lot of stuff is likely to happen. And from our perspective it’s likely to be all tied up with AI. I normally don’t think about that because I’m sort of not looking that far ahead. That is, in some sense I think there’s not much calendar time between the world of now and the world of “crazy stuff is happening every month”, but a lot happens in the interim, right? The only way in which things are okay is if there are AI systems looking out for human interests as you’re going through that transition. And from the perspective of those AI systems, a lot of time passes, or like, a lot of cognitive work happens.", "completion": "So I guess the first point was, I think there are a lot of risks in the future. In some sense from our perspective what it’s going to feel like is the world accelerates and starts getting really crazy. And somehow AI is tied up with that. But I think if you were to be looking on the outside you might then see all future risks as risks that felt like they were about AI. But in some sense, they’re kind of not our risks to deal with; they’re the risks of the civilization that we become, which is a civilization largely run by AI systems."} {"prompt": "Okay. So you imagine, look, we might just have really dangerous problems later. Maybe there’s aliens or maybe we have to coordinate well and AIs would somehow be involved.", "completion": "Yeah. 
So if you imagine a future nuclear war or something like that, or if you imagine all of the future progressing really quickly: then from your perspective on the outside, what it looks like is that huge amounts of change are occurring over the course of every year, and so one of those changes is that something that would’ve taken hundreds of years, getting to the crazy destructive nuclear war, now only takes a couple years. And from your perspective, it’s kind of like, “Man, our crazy AI started a nuclear war.” From the AI’s perspective it’s like we had many generations of change and this was one of the many coordination problems we faced, and we ended up with a nuclear war. It’s kind of like, do you attribute nuclear wars as a failure of the industrial revolution, or a risk of the industrial revolution? I think that would be a reasonable way to do the accounting. If you do the accounting that way there are a lot of risks that are AI risks, just in the sense that there are a lot of risks that are industrial revolution risks. That’s one category of answer: I think there are a lot of risks that kind of feel like AI risks in that they’ll be consequences of crazy AI-driven conflict or things like that, just because I view a lot of the future as crazy fast stuff driven by AI systems."} {"prompt": "Okay.", "completion": "There’s a second category of risks that to me feel more analogous to alignment, which are risks that are really associated with this early transition to AI systems, where we will not yet have AI systems competent enough to play a significant role in addressing those risks, so a lot of the work falls to us. I do think there are a lot of non-alignment risks associated with AI there. I’m happy to go into more of those. I think broadly the category that I am most scared about is that there’s some kind of deliberative trajectory that humanity is ideally on, or that we want to be walking along. We want to be better clarifying what we want to do with the universe, what it is we want as humans, how we should live together, et cetera. There’s some question of just, are we happy with where that process goes? Or if you’re a moral realist type, do we converge towards moral truth? If you think that there’s some truth of the matter about what is good, do we converge towards that? But even if you don’t think there’s a fact of the matter you could still say, “Are we happy with the people we become?” And I think I’m scared of risks of that type. And in some sense alignment is very similar to risks of that type, because you kind of don’t get a lot of tries at them.", "completion": "You’re going to become some sort of person, and then after we as a society converge on what we want, or as what we want changes, there’s no one looking outside of the system who’s like, “Oops! We messed that one up. Let’s try again.” If you went down a bad path, you’re sort of by construction now happy with where you are, but the question is about what you wanted to achieve. So I think there’s potentially a lot of path dependence there. A lot of that is tied up with AI: there are a lot of ways in which the deployment of AI systems will really change the way that humans talk to each other and think about what we want, or think about how we should relate.", "completion": "I’m happy to talk about some of those but I think the broad thing is just, if a lot of thinking is being done not by humans, that’s just a weird situation for humans to be in, and it’s a little bit unclear. 
If you’re not really thoughtful about that, it’s unclear if you’re happy with it. If you told me that the world with AI and the world without AI converged to different views about what is good, I’m kind of like, “Oh, I don’t know which of those… “ Once you tell me there’s a big difference between those, I’m kind of scared. I don’t know which side is right or wrong, they’re both kind of scary. But I am definitely scared."} {"prompt": "So, I think you said that relatively soon, we might end up in this kind of world where most of the thinking is being done by AI. So there’s this claim that AI is going to get really good, and not only is it getting really good, it’s going to be the dominant way we do most cognitive work, or most thinking maybe. And not only is that eventually going to happen, it’s not going to be too long from now. I guess the first thing I’d like to hear is, by not too long from now do you mean the next 1000 years, the next 100 years, the next 10 years? And if somebody’s skeptical of that claim, could you tell us why you believe that?", "completion": "So I guess there’s a couple parts of the claim. One is AI systems becoming… I think right now we live in a world where AI does not very much change the way that humans get things done. That is, technologies you’d call AI are not a big part of how we solve research questions or how we design new products or so on. There’s some transformation from the world of today to a world in which AI is making us, say, considerably more productive. And there’s a further step to the world where human labor is essentially obsolete, where it’s from our perspective this crazy fast process. So I guess my overall guess is I have a very broad distribution over how long things will take. Especially how long it will take to get to the point where AI is really large, where maybe humans are getting twice as much done, or getting things done twice as quickly due to AI overall.", "completion": "Maybe I think that there’s a small chance that that will happen extremely quickly. So there’s some possibility of AI progress being very rapid from where we are today. Maybe in 10 years, I think there’s a 5% or 10% chance that AI systems can make most things humans are doing much, much faster. And then kind of taking over most jobs from humans. So I think that 5% to 10% chance of 10 years, that would be a pretty crazy situation where things are changing pretty quickly. I think there’s a significantly higher probability in 20 or 40 years. Again in 20 years maybe I’d be at 25%. At 40 years maybe I’m at 50%, something like that. So that’s the first part of the question, when are we in this world where the world looks very different because of AI, where things are happening much faster? And then I think I have a view that feels less uncertain, but maybe more contrarian about… I mean more contrarian than the world at large, very not-that-contrarian amongst the effective altruist or rationalist or AI safety community.", "completion": "So I have another view which I think I feel a little bit less uncertain about, that is more unusual in the world at large, which is that you only have probably on the order of years between AI that has… maybe you can imagine it’s three years between AI systems that have effectively doubled human productivity and AI systems that have effectively completely obsoleted humans. And it’s not clear. There’s definitely significant uncertainty about that number, but I think it feels quite likely to me that it’s relatively short. 
I think amongst people who think about alignment risk, I actually probably have a relatively long expected amount of time between those milestones.", "completion": "And if you talk to someone like Eliezer Yudkowsky from MIRI, I think he would be more like “good chance that that’s only one month” or something like that between those milestones. I have the view that the best guess would be somewhere from one to five years. And I think even at that timeline, that’s pretty crazy and pretty short. Yeah. So my answer was some broad distribution over how many decades until you have AI systems that have really changed the game, and are making humans several times more productive. Say the economy’s growing several times faster than it is today. And then from there most likely on the order of years rather than decades until humans are basically completely obsolete, and AI systems have improved significantly past that first milestone."} {"prompt": "And can you give us a sense of why somebody might believe that?", "completion": "Yeah. Maybe I’ll start with the second and then go back to the first. I think the second is, in some sense, a less popular position in the broader world. I think one important part of story is the current rate of progress that you would observe in either computer hardware or computer software. So if you ask given an AI system, how long does it take to get, say, twice as cheap until you can do the same thing that it used to be able to do for half as many dollars? That tends to be something in the ballpark of a year, rather than something in the ballpark of a decade. So right now that doesn’t matter very much at all. So if you’re able to do the same or you’re able to train the same neural net for half the dollars, it doesn’t do that much. It just doesn’t help you that much if you’re able to run twice as many neural networks. Even if you have self-driving cars, the cost of running the neural networks isn’t actually a very big deal. Having twice as many neural networks to drive your cars doesn’t improve overall output that much. If you’re in a world where, say, you have AI systems which are effectively substituting for human researchers or human laborers, then having twice as many of them eventually becomes more like having twice as many humans doing twice as much work, which is quite a lot, right? So that is more like doubling the amount of total stuff that’s happening in the world.", "completion": "It doesn’t actually double the amount of stuff because there’s a lot of bottlenecks, but it looks like, starting from the point where AI systems are actually doubling the rate of growth or something like that, it doesn’t really seem there are enough bottlenecks to prevent further doublings in the quality of hardware or software from having really massive impacts really quickly. So that’s how I end up with thinking that the time scale is measured more like years than decades. Just like, once you have AI systems which are sort of comparable with humans or are in aggregate achieving as much as humans, it doesn’t take that long before you have AI systems whose output is twice or four times that of humans."} {"prompt": "Okay. And so this is basically something like, in economics you call it an endogenous growth story, or a society-wide recursive self-improvement story. 
Where if you double the human population, and if they’re AI systems, maybe that makes it better, there are just more ideas, more innovation and a lot of it gets funneled back into improving the AI systems that are a large portion of the cognitive labor. Is that roughly right?", "completion": "Yeah. I think that’s basically right. I think there are kind of two parts to the story. One is what you mentioned of all the outputs get plowed back into making the system ever better. And I think that, in the limit, produces this dynamic of successive doublings of the world where each is significantly faster than the one before."} {"prompt": "Yep.", "completion": "I think there’s another important dynamic that can be responsible for kind of abrupt changes that’s more like, if you imagine that humans and AIs were just completely interchangeable: you can either use a human to do a task or an AI to do a task. This is a very unrealistic model, but if you start there, then there’s kind of the curve of how expensive it is or how much we can get done using humans, which is growing a couple percent per year, and then how much you can get done using AIs, which is growing 100% per year or something like that. So you can kind of get this kink in the curve when the rapidly growing 100% per year curve intercepts and then continues past the slowly growing human output curve.", "completion": "If output was the sum of two exponentials, one growing fast and one growing slow, then you can have a fairly quick transition as one of those terms becomes the dominant one in the expression. And that dynamic changes if humans and AIs are complementary in important ways. And also the rate of progress changes if you change… like, progress is driven by R&D investments, it’s not an exogenous fact about the world that once every year things double. But it looks the basic shape of that curve is pretty robust to those kinds of questions, so that you do get some kind of fairly rapid transition."} {"prompt": "Okay. So we currently have something like a curve where humanity gets richer, we’re able to produce more food. And in part, maybe not as much in wealthy countries, but in part that means there are more people around and more people having ideas. So, you might think that the normal economy has this type of feedback loop, but it doesn’t appear that at some point there’s going to be these crazy doubling times of 5 to 10 years and humanity is just going to go off the rails. So what’s the key difference between humans and AI systems that makes the difference?", "completion": "It is probably worth clarifying that on these kinds of questions I am more hobbyist than expert. But I’m very happy to speculate about them, because I love speculating about things."} {"prompt": "Sure.", "completion": "So I think my basic take would be that over the broad sweep of history, you have seen fairly dramatic acceleration in the rate of humans figuring new things out, building new stuff. And there’s some dispute about that acceleration in terms of how continuous versus how jumpy it is. But I think it’s fairly clear that there was a time when aggregate human output was doubling more like every 10,000 or 100,000 years."} {"prompt": "Yep.", "completion": "And that has dropped somewhere between continuously and in three big jumps or something, down to doubling every 20 years. 
And we don’t have very great data on what that transition looks like, but I would say that it is at least extremely consistent with exactly the kind of pattern that we’re talking about in the AI case."} {"prompt": "Okay.", "completion": "And if you buy that, then I think you would say that the last 60 years or so have been fairly unusual as growth hit this… maybe gross world product growth was on the order of 4% per year or something in the middle of the 20th century. And the reason things have changed, there’s kind of two explanations that are really plausible to me. One is you no longer have accelerating population growth in the 20th century. So for most of human history, human populations are constrained by our ability to feed people. And then starting in the 19th, 20th centuries human populations are instead constrained by our desire to create more humans, which is great.", "completion": "It’s good not to be dying because you’re hungry. But that means that you no longer have this loop of more output leading to more people. I think there’s a second related explanation, which is that the world now changes kind of roughly on the time scale of human lifetime, that is like, it now takes decades for a human to adapt to change and also decades for the world to change a bunch. So you might think that changing significantly faster than that does eventually become really hard for processes driven by humans. So you have additional bottlenecks just beyond how much work is getting done, where it’s at some point very hard for humans to train and grow new humans, or train and raise new humans."} {"prompt": "Okay.", "completion": "So those are some reasons that a historical pattern of acceleration may have recently stopped. Either because it’s reached the characteristic timescales of humans, or because we’re no longer sort of feeding output back into raising population. Now we’re sort of just growing our population at the rate which is most natural for humans to grow. Yeah, I think that’s my basic take. And then in some sense AI would represent a return to something that at least plausibly was a historical norm, where further growth is faster, because research is one of those things or learning is one of those things that has accelerated. Recently I don’t know if you’ve discussed this before, but Holden Karnofsky at Cold Takes has been writing a bunch of blog posts summarizing what this view looks like, and some of the evidence for it. And then prior to that, Open Philanthropy was writing a number of reports looking at pieces of the story and thinking through it, which I think overall taken together makes the view seem pretty plausible, still."} {"prompt": "Okay. So I’m now going to transition a little bit to questions that somebody who is very bullish on AI x-risk might ask, or ways they might disagree with you. I mean bullish on the risk, bearish on the survival. Bullish meaning you think something’s going to go up and bearish meaning you think something’s going to go down. So yeah, some people have this view that it might be the case that you have one AI system that you’re training for a while. Maybe you’re a big company, you’re training it for a while, and it goes from not having a noticeable impact on the world to effectively running the world in less than a month. This is often called the Foom view. Where your AI blows up really fast in intelligence, and now it’s king of the world. I get the sense that you don’t think this is likely, is that right?", "completion": "I think that’s right. 
Although, it is surprisingly hard to pin down exactly what the disagreement is about, often. And the thing that I have in mind may feel a lot like foom. But yeah, I think it’s right, that the version of that, that people who are most scared have in mind, feels pretty implausible to me."} {"prompt": "Okay. So, it seems like key place where that story differs is in the foom story, it was very localized. There was one group where AI was growing really impressively. Am I right, that you are thinking, no, probably a bunch of people will have AI technology that’s like only moderately worse than this amazing thing?", "completion": "Yeah. I think that’s basically right. The main caveat is what “one group” means. And so I think I’m open to saying, “Well, there’s a question of how much integration there is in the industry.”"} {"prompt": "Yeah.", "completion": "And you could imagine that actually most of the AI training is done… I think there are these large economies of scale in training machine learning systems. Because you have to pay for these very large training runs, and you just want to train. You want to train the biggest system you can and then deploy that system a lot of times, often. Training a model that’s twice as big and deploying half as many of them is better than training a smaller model and deploying. Though obviously, it depends on the domain. But anyway, you often have these economies of scale."} {"prompt": "Yep.", "completion": "If you have economies of scale, you might have a small number of really large firms. But I am imagining then you’re not talking, some person in the basement, you’re talking, you have this crazy $500 billion project at Google."} {"prompt": "Yep.", "completion": "In which Google, amongst other industries, is being basically completely automated."} {"prompt": "And so there, the view is, the reason that it’s not localized is that Google’s a big company and while this AI is fooming, they sort of want to use it a bit to do things other than foom.", "completion": "Yeah. That’s right. I think one thing I am sympathetic to in the fast takeoff story is, it does seem like in this world, as you’re moving forward and closer to AIs having parity with humans, the value of the sector - computer hardware, computer software, any innovations that improve the quality of AI - all of those are becoming extremely important. You are probably scaling them up rapidly in terms of human effort. And so at that point, you have this rapidly growing sector, but it’s hard to scale it up any faster, people working on AI or working in computer hardware and software.", "completion": "And so, there’s this really high return to human cognitive labor in that area. And so probably it’s the main thing you’re taking and putting the AIs on, the most important task for them. And also the task you understand best as an AI research lab, is improving computer hardware, computer software, making these training runs more efficient, improving architectures, coming up with better ways to deploy your AI. So, I think it is the case that in that world, maybe the main thing Google is doing with their $500 billion project is automating Google and a bunch of adjacent firms. I think that’s plausible. And then I think the biggest disagreement between the stories is, what is the size of that as it’s happening? 
Is that happening in some like local place with a small AI that wasn’t a big deal, or is this happening at some firm where all the eyes of the world are on this firm, because it’s this rapidly growing firm that makes up a significant fraction of GDP and is seen as a key strategic asset by the host government and so on."} {"prompt": "And if you think that, then it seems maybe, one day you could just go from not having the equation, to having it, or something? And in that case, you might expect that, you’re just so much better when you have the ideal rationality equation, compared to when you had to do whatever sampling techniques and you didn’t realize how to factor in bounded rationality or something. Why don’t you think that’s plausible, or why don’t you think it would make this sudden leap in capabilities?", "completion": "I don’t feel like I have deep insight into whether intelligence has some beautiful, simple core. I’m not persuaded by the particular candidates, or the particular arguments on offer for that."} {"prompt": "Okay.", "completion": "And so I am more feeling there’s a bunch of people working on improving performance on some task. We have some sense of how much work it takes to get what kind of gain, and what the structure is for that task. If you look at a new paper, what kind of gain is that paper going to have and how much work did it have? How does that change as more and more people have worked in the field? And I think both in ML and across mature industries in general, but even almost unconditionally, it’s just pretty rare to have like a bunch of work in an area, and then some small overlooked thing makes a huge difference. In ML, we’re going to be talking about many billions of dollars of invest, tens or hundreds of billions, quite plausibly.", "completion": "It’s just very rare to then have a small thing, to be like, “Oh, we just overlooked all this time, this simple thing, which makes a huge difference.” My training is as a theorist. And so I like clever ideas. And I do think clever ideas often have big impacts relative to the work that goes into finding them. But it’s very hard to find examples of the impacts being as big as the one that’s being imagined in this story. I think if you find your clever algorithm and then when all is said and done, the work of noticing that algorithm, or the luck of noticing that algorithm is worth a 10X improvement in the size of your computer or something, that’s a really exceptional find. And those get really hard to find as a field is mature and a lot of people are working on it.", "completion": "Yeah. I think that’s my basic take. I think it is more plausible for various reasons in ML than for other technologies. It’s more surprising than that if you’re working on planes and someone’s like, “Oh, here’s an insight about how to build planes.” And then suddenly you have planes that are 10 times cheaper per unit of strategic relevance. That’s more surprising than for ML. And that kind of thing does happen sometimes. But I think it’s quite rare in general, and it will also be rare in ML."} {"prompt": "So another question I have about takeoff speed is, we have some evidence about AI technology getting better. Right? These Go-playing programs have improved in my lifetime from not very good to better than any human. Language models have gotten better at producing language, roughly like a human would produce it, although perhaps not an expert human. 
I’m wondering, what do you think those tell us about the rate of improvement in AI technology, and to what degree further progress in AI in the next few years might confirm or disconfirm your general view of things?", "completion": "I think that the overall rate of progress has been, in software as in hardware, pretty great. It’s a little bit hard to talk about what the units are for how good your AI system is. But I think a conservative lower bound is just: can you do twice as much stuff for the same money? We understand what the scaling of twice as many humans is like. And in some sense, the scaling of AI is more like humans thinking twice as fast. And we understand quite well what the scaling of that is like. So if you use those as your units, where one unit of progress is being twice as fast at accomplishing the same goals, then it seems like the rate of progress has been pretty good in AI. Maybe something like a doubling a year. And then I think a big question is, how predictable is that, or how much will that drive this gradual scale up, in this really large effort that’s plucking all the low hanging fruit, and now is at pretty high hanging fruit. I think the history of AI is full of a lot of incidents of people exploring a lot of directions, not being sure where to look. Someone figures out where to look, or someone has a bright idea no one else had, and then is a lot better than their competition. And I think one of the predictions of my general view, and the thing that would make me more sympathetic to a foom-like view, is this axis of, are you seeing a bunch of small, predictable pieces of progress or are you seeing periodic big wins, potentially coming from small groups? Like, the one group that happened to get lucky, or have a bunch of insight, or be really smart. And I guess I’m expecting, as the field grows and matures, it will be more and more boring, business-as-usual progress."} {"prompt": "So one thing you’ve talked about is this idea that there might be AI systems who are trying to do really bad stuff. Presumably humans train them to do some useful tasks, at least most of them. And you’re postulating that they have some really terrible motivations, actually. I’m wondering, why might we think that that could happen?", "completion": "I think there are basically two related reasons. So one is when you train a system to do some task, you have to ultimately translate that into a signal that you give to gradient descent that says, “Are you doing well or poorly?” And so, one way you could end up with a system that has bad motivations is that what it wants is not to succeed at the task as you understand it, or to help humans, but just to get that signal that says it’s doing the task well. Or, maybe even worse, would be for it to just want more of the compute in the world to be stuff like it. It’s a little bit hard to say, it’s kind of like evolution, right? It’s sort of underdetermined exactly what evolution might point you towards. Imagine you’ve deployed your AI, which is responsible for, like, running warehouse logistics or whatever.", "completion": "The AI is actually deployed from a data center somewhere. And at the end of the day, what’s going to happen is, based on how well logistics goes over the course of some days or some weeks or whatever, some signals are going to wind their way back to that data center. Some day, maybe months down the line, they’ll get used in a training run. 
You’re going to say, “That week was a good week”, and then throw it into a data set, which an AI then trains on. So if I’m that AI, if the thing I care about is not making logistics go well, but ensuring that the numbers that make their way back to the data center are large numbers, or are like descriptions of a world where logistics is going well, I do have a lot of motive to mess up the way you’re monitoring how well logistics is going.", "completion": "So in addition to delivering items on time, I would like to mess with the metric of how long items took to be delivered. In the limit I kind of just want to completely grab all of the data flowing back to the data center, right? And so what you might expect to happen, how this gets really bad, is like, “I’m an AI. I’m like, oh, it would be really cool if I just replaced all of the metrics coming in about how well logistics was going.” I do that once. Eventually that problem gets fixed. And my data set now contains… “They messed with the information about how well logistics is going, and that was really bad.” And that’s the data point. And so what it learns is it should definitely not do that, and there’s a good generalization, which is, “Great. Now you should just focus on making logistics good.” And there’s a bad generalization, which is like, “If I mess with the information about how well logistics is going, I better not let them ever get back into the data center to put in a data point that says: ‘you messed with it and that was bad.’” And so the concern is, you end up with a model that learns the second thing, which in some sense, from the perspective of the algorithm, is the right behavior, although it’s a little bit unclear what ‘right’ means."} {"prompt": "Yeah.", "completion": "But there’s a very natural sense in which that’s the right behavior for the algorithm. And then it produces actions that end up in the state where predictably, forevermore, data going into the data center is messed up."} {"prompt": "So basically it’s just like, there’s some kind of underspecification where whenever we have some AI systems that we’re training, we can either select things that are attempting to succeed at the task, or we can select things that are trying to be selected, or trying to get approval, or influence or something.", "completion": "I think that gets really ugly. If you imagine all of the AIs in all of the data centers are like, “You know what our common interest is? Making sure all the data coming into all the data centers is great.” Then at some point, if they just converge collectively, there are behaviors such that all of the AIs acting in concert could probably quite easily, permanently mess with the data coming back into the data centers. Depending on how they felt about the possibility that the data centers might get destroyed or whatever."} {"prompt": "So that was one of two ways that we could have these really badly motivated systems. What’s the other way?", "completion": "So you could imagine having an AI system that ended up… we talked about how there’s some objective which the neural network is optimized for, and then potentially the neural network is itself doing further optimization, or taking actions that could be construed as aiming at some goal. And you could imagine a very broad range of goals for which the neural network would want future neural networks to be like it, right? 
So if the neural network wants there to be lots of paper clips, the main thing it really cares about is that future neural networks also want there to be lots of paper clips. And so if I’m a paper clip-loving neural network, wanting future neural networks to be like me, then it would be very desirable to me that I get a low loss, or that I do what the humans want to do. So that they incentivize neural networks to be more like me rather than less like me.", "completion": "So, that’s a possible way. And I think this is radically more speculative than the previous failure mode. But you could end up with systems that had these arbitrary motivations, for which it was instrumentally useful to have more neural networks like themselves in the world, or even just desire there to be more neural networks like themselves in the world. And those neural networks might then behave arbitrarily badly in the pursuit of having more agents like them around. So if you imagine the, “I want paper clips. I’m in charge of logistics. Maybe I don’t care whether I can actually cut the cord to the data center and have good information about logistics flowing in. All I care about is that I can defend the data center, and I could say, ‘Okay, now this data center is mine and I’m going to go and try and grab some more computers somewhere else.’”", "completion": "And if that happened in a world where most decisions were being made by AIs, and many AIs had this preference deep in their hearts, then you could imagine lots of them defecting at the same time. You’d expect this cascade of failures, where some of them switched over to trying to grab influence for themselves, rather than behaving well so that humans would make more neural nets like them. So I think that’s the other more speculative and more brutally catastrophic failure mode. I think they both lead to basically the same place, but the trajectories look a little bit different."} {"prompt": "Yeah. We’ve kind of been talking about how quickly we might develop really smart AI. If we hit near human level, what might happen after that? And it seems like there might be some evidence of this in our current world, where we’ve seen, for instance, these language models go from sort of understanding which words are really English words and which words aren’t, to being able to produce sentences that seem semantically coherent or whatever. We’ve seen Go AI systems go from strong human amateur to really better than any human. And some other things like some perceptual tasks AI’s gotten better at. I’m wondering, what lessons do you think those hold for this question of take off speeds, or how quickly AI might gain capabilities?", "completion": "So I think when interpreting recent progress, it’s worth trying to split apart the part of progress that comes from increasing scale - to me, this is especially important on the language modeling front and also on the Go front - to split apart the part of process that comes from increasing scale, from progress that’s improvements in underlying algorithms or improvements in computer hardware. Maybe one super quick way to think about that is, if you draw a trend line on how much peak money people are spending for training individual models, you’re getting something like a couple doublings a year right now. And then on the computer hardware side, maybe you’re getting a doubling every couple years. 
So you could sort of subtract those out and then look at the remainder that’s coming from changes in the algorithms we’re actually running.", "completion": "I think probably the most salient thing is that improvements have been pretty fast. So I guess you’re learning about two things. One is you’re learning about how important those factors are in driving progress, and the other is you’re learning, qualitatively, how much smarter it feels like your AI is with each passing year. So, I guess, on the scaling-up part: a lot of the subjective progress recently comes from scaling up. I think certainly more than half of it comes from scaling up. We could debate exactly what the number is. Maybe it’d be two thirds, or something like that. And so you’re probably not going to continue seeing that as you approach transformative AI, although one way you could have really crazy AI progress or really rapid takeoff is if people had only been working with small AIs, and hadn’t scaled them up to the limits of what was possible.", "completion": "That’s obviously looking increasingly unlikely as the training runs that we actually do are getting bigger and bigger. Five years ago, training runs were extremely small. 10 years ago, they were sub-GPU scale, significantly smaller than a GPU. Whereas now you have at least, like, $10 million training runs. With each order of magnitude there, it gets less likely that we’ll still be doing this rapid scale up at the point when we make this transition to AIs doing most of the work. I’m pretty interested in the question of whether algorithmic progress and hardware progress will be as fast in the future as they are today, or whether they will have sped up or slowed down. I think the basic reason you might expect them to slow down is that in order to sustain the current rate of progress, we are very rapidly scaling up the number of researchers working on the problem.", "completion": "And I think most people would guess that if you held fixed the research community of 2016, they would’ve hit diminishing returns and progress would’ve slowed a lot. So right now, the research community is growing extremely quickly. That’s part of the normal story for why we’re able to sustain this high rate of progress. That, also, is something we can’t sustain much longer. You can’t grow the number of ML researchers more than like… maybe you can do three more orders of magnitude, but even that starts pushing it. So I’m pretty interested in whether that will result in progress slowing down as we keep scaling up. There’s an alternative world, especially if transformative AI is developed soon, where we might see that number scaling up even faster as we approach transformative AI than it is right now. So, that’s an important consideration when thinking about how fast the rate of progress is going to be in the future relative to today. I think the scale up is going to be significantly slower.", "completion": "I think it’s unclear how fast the hardware and software progress are going to be relative to today. My best guess is probably a little bit slower. The using-up of low-hanging fruit will eventually outpace growth in the research community. And so then, maybe mapping that back onto this qualitative sense of how fast our capability is changing: I do think that each order of magnitude does make systems, in some qualitative sense, a lot smarter. And we kind of know roughly what an order of magnitude gets you. 
There’s this huge mismatch, that I think is really important, where we used to think of an order of magnitude of compute as just not that important.", "completion": "So for most applications that people spend compute on, compute is just not one of the important ingredients. There’s other bottlenecks that are a lot more important. But we know in the world where AI is doing all the stuff humans are doing, that twice as much compute is extremely valuable. If you’re running your computers twice as fast, you’re just getting the same stuff done twice as quickly. So we know that’s really, really valuable. So being in this world where things are doubling every year, that seems to me like a plausible world to be in, as we approach transformative AI. It would be really fast. But it would be slower than today, but it still just qualitatively, would not take long until you’d move from human parity to way, way above humans. That was all just thinking about the rate of progress now and what that tells us about the rate of progress in the future.", "completion": "And I think that is an important parameter for thinking about how fast takeoff is. I think my basic expectations are really anchored to this one to two year takeoff, because that’s how long it takes AI systems to get a couple times better. And we could talk about, if we want to, why that seems like the core question? Then there’s another question of, what’s the distribution of progress like, and do we see these big jumps, or do we see gradual progress? And there, I think there are certainly jumps. It seems like the jumps are not that big, and are gradually getting smaller as the field grows, would be my guess. I think it’s a little bit hard for me to know exactly how to update from things like the Go results. Mostly because I don’t have a great handle on how large the research community working on computer Go was, prior to the DeepMind effort.", "completion": "I think my general sense is, it’s not that surprising to get a big jump, if it’s coming from a big jump in research effort or attention. And that’s probably most of what happened in those cases. And also a significant part of what’s happened more recently in the NLP case, just people really scaling up the investment, especially in these large models. And so I would guess you won’t have jumps that are that large, or most of the progress comes from boring business as usual progress rather than big jumps. In the absence of that kind of big swing, where people are changing what they’re putting attention into and scaling up R&D in some area a lot."} {"prompt": "So the question is, holding factor inputs fixed, what have we learned about ML progress?", "completion": "So I think one way you can try and measure the rate of progress is you can say, “How much compute does it take us to do a task that used to take however many FLOPS last year? How many FLOPS will it take next year? And how fast is that number falling?” I think on that operationalization, I don’t really know as much as I would like to know about how fast the number falls, but I think something like once a year, like halving every year. I think that’s the right rough ballpark both in ML, and in computer chess or computer Go prior to introduction of deep learning, and also broadly for other areas of computer science. In general you have this pretty rapid progress, according to standards in other fields. It’d be really impressive in most areas to have cost falling by a factor of two in a year. And then that is kind of part of the picture. 
Another part of the picture is like, “Okay, now if I scale up my model size by a factor of two or something, or if I like throw twice as much compute at the same task, rather than try to do twice as many things, how much more impressive is my performance with twice the compute?”", "completion": "I think it looks like the answer is, it’s a fair bit better. Having a human with twice as big a brain looks like it would be a fair bit better than having a human thinking twice as long, or having two humans. It’s kind of hard to estimate from existing data. But I often think of it as, roughly speaking, doubling your brain size is as good as quadrupling the number of people or something like that, as a vague rule of thumb. So the rate of progress then in some sense is even faster than you’d think just from how fast costs are falling. Because as costs fall, you can convert that into these bigger models, which are sort of smarter per unit in addition to being cheaper."} {"prompt": "So we’ve been broadly talking about the potential really big risk to humanity of AI systems becoming really powerful, and doing stuff that we don’t want. So we’ve recently been through this COVID-19 global pandemic. We’re sort of exiting it, at least in the part of the world where you and I are, the United States. Some people have taken this to be relevant evidence for how people would react in the case of some AI causing some kind of disaster. Would we make good decisions, or what would happen? I’m wondering, do you think, in your mind, do you think this has been relevant evidence of what would go down, and to what degree has it changed your beliefs? Or perhaps epitomized things you thought you already knew, but you think other people might not know?", "completion": "Yeah. I had a friend analogize this experience to some kind of ink blot test. Where everyone has the lesson they expected to draw, and they can all look at the ink blot and see the lesson they wanted to extract. I think a way my beliefs have changed is it feels to me that our collective response to COVID-19 has been broadly similar to our collective response to other novel problems. When humans have to do something, and it’s not what they were doing before, they don’t do that hot. I think there’s some uncertainty over the extent to which we have a hidden reserve of ability to get our act together, and do really hard things we haven’t done before. That’s pretty relevant to the AI case. Because if things are drawn out, there will be this period where everyone is probably freaking out. Where there’s some growing recognition of a problem, but where we need to do something different than we’ve done in the past.", "completion": "We’re wondering when civilization is on the line, are we going to get our act together? I remain uncertain about that. The extent to which we have, when it really comes down to it, the ability to get our act together. But it definitely looks a lot less likely than it did before. Maybe I would say the COVID-19 response was down in my 25th percentile or something of how much we got our act together, surprisingly, when stuff was on the line. It involved quite a lot of everyone having their lives massively disrupted, and a huge amount of smart people’s attention on the problem. But still, I would say we didn’t fare that well, or we didn’t manage to dig into some untapped reserves of ability to do stuff. It’s just hard for us to do things that are different from what we’ve done before.", "completion": "That’s one thing. 
Maybe a second update, that’s a side in an argument I’ve been on that I feel like should now be settled forevermore, is sometimes you’ll express concern about AI systems doing something really bad and people will respond in a way that’s like, “Why wouldn’t future people just do X? Why would they deploy AI systems that would end up destroying the world?” Or, “Why wouldn’t they just use the following technique, or adjust the objective in the following way?” And I think that in the COVID case, our response has been extremely bad compared to sentences of the form, “Why don’t they just…” There’s a lot of room for debate over how well we did collectively, compared to where expectations should have been. But I think there’s not that much debate of the form, if you were telling a nice story in advance, there are lots of things you might have expected “we would just…”", "completion": "And so I do think that one should at least be very open to the possibility that there will be significant value at stake, potentially our whole future. But we will not do things that are in some sense, obvious responses to make the problem go away. I think we should all be open to the possibility of a massive failure on an issue that many people are aware of. Due to whatever combination of, it’s hard to do new things, there are competing concerns, random basic questions become highly politicized, there’s institutional issues, blah blah blah. It just seems like it’s now very easy to vividly imagine that. I think I have overall just increased my probability of the doom scenario, where you have a period of a couple years of AI stuff heating up a lot. There being a lot of attention. A lot of people yelling. A lot of people very scared. I do think that’s an important scenario to be able to handle significantly better than we handled the pandemic, hopefully. I mean, hopefully the problem is easier than the pandemic. I think there’s a reasonable chance handling the alignment thing will be harder than it would’ve been to completely eradicate COVID-19, and not have to have, large numbers of deaths and lockdowns. I think, if that’s the case, we’d be in a rough spot. Though also, I think it was really hard for the effective altruist community to do that much to help with the overall handling of the pandemic. And I do think that the game is very different, the more you’ve been preparing for that exact case. And I think it was also a helpful illustration of that in various ways."} {"prompt": "So the final thing, before we go into specifically what technical problems we could solve to stop existential risk, back in 2014, this Oxford philosopher, Nick Bostrom, wrote an influential book called Superintelligence. If you look at the current strand of intellectual influence around AI alignment research, I believe it was the first book in that vein to come out. It’s been seven years since 2014, when it was published. I think the book currently strikes some people as somewhat outdated. But it does try to go into what the advance of AI capabilities would perhaps look like, and what kind of risks could that face? So I’m wondering, how do you see your current views as comparing to those presented in Superintelligence, and what do you think the major differences are, if any?", "completion": "I guess when looking at Superintelligence, you could split apart something that’s the actual claims Nick Bostrom is making and the kinds of arguments he’s advancing, versus something that’s like a vibe that overall permeates the book. 
I think that, first about the vibe, even at that time, I guess I’ve always been very in the direction of expecting AI to look like business as usual, or to progress somewhat in a boring, continuous way, to be unlikely to be accompanied by a decisive strategic advantage for the person who develops it."} {"prompt": "What is a decisive strategic advantage?", "completion": "This is an idea, I think Nick introduced maybe in that book, of the developer of a technology being at the time they develop it, having enough of an advantage over potential competitors, either economic competitors or military competitors, that they can call the shots. And if someone disagrees with the shots that they called, they can just crush them. I think he has this intuition that there’s a reasonable chance that there will be some small part of the world, maybe a country or a firm or whatever, that develops AI, that will then be in such a position that they can just do whatever they want. You can imagine that coming from other technologies as well, and people really often talk about it in the context of transformative AI."} {"prompt": "And so even at the time you were skeptical of this idea that some AI system would get a decisive strategic advantage, and rule the world or something?", "completion": "Yeah. I think that I was definitely skeptical of that as he was writing the book. I think we talked about it a fair amount and often came down the same way: he’d point to the arguments and be like, look, these aren’t really making objectionable assumptions and I’d be like, that’s true. There’s something in the vibe that I don’t quite resonate with, but I do think the arguments are not nearly as far in this direction as part of the vibe. Anyways, there’s some spectrum of how much decisive strategic advantage, hard take off you expect things to be, versus how boring looking, moving slowly, you expect things to be. Superintelligence is not actually at the far end of the spectrum - probably Eliezer and MIRI folks are at the furthest end of that spectrum. Superintelligence is some step towards a more normal looking view, and then many more steps towards a normal looking view, where I think it will be years between when you have economically impactful AI systems and the singularity. Still a long way to get from me to an actual normal view.", "completion": "So, that’s a big factor. I think it affects the vibe in a lot of places. There’s a lot of discussion, which is really, you have some implicit image in the back of your mind and it affects the way you talk about it. And then I guess in the interim, I think my views have, I don’t know how they’ve directionally changed on this question. It hasn’t been a huge change. I think there’s something where the overall AI safety community has maybe moved more, and things seem probably there’ll be giant projects that involve large amounts of investment, and probably there’ll be a run up that’s a little bit more gradual. I think that’s a little bit more in the water than it was when Superintelligence was written.", "completion": "I think some of that comes from shifting who is involved in discussions of alignment. As it’s become an issue more people are talking about, views on the issue have tended to become more like normal person’s views on normal questions. I guess I like to think some of it is that there were some implicit assumptions being glossed over, going into the vibe. 
I guess Eliezer would basically pin this on people liking to believe comfortable stories, and the disruptive change story is uncomfortable. So everyone will naturally gravitate towards a comfortable, continuous progress story. That’s not my account, but that’s definitely a plausible account for why the vibe has changed a little bit.", "completion": "So that’s one way in which I think the vibe of Superintelligence maybe feels like it comes from some years ago. I think in terms of the arguments, the main thing is just that the book is making what we would now talk about as very basic points. It’s not getting that much into empirical evidence on a question like takeoff speeds, and is more raising the possibility of, well, it could be the case that AI is really fast at making AI better. And it’s good to raise that possibility. That naturally leads into people really getting more into the weeds and being like, well, how likely is that? And what historical data bears on that possibility, and what are really the core questions? Yeah, I guess my sense, and I haven’t read the book in a pretty long time, is that the arguments and claims where it’s more sticking its neck out, just tend to be milder, less in-the-weeds claims. And then the overall vibe is a little bit more in this decisive strategic advantage direction."} {"prompt": "Yeah.", "completion": "I remember discussing with him as he was writing it. There’s one chapter in the book on multipolar outcomes, which, to me, feels weird. And then I’m like, the great majority of possible outcomes involve lots of actors with considerable power. It’s weird to put that in one chapter."} {"prompt": "Yeah.", "completion": "Where I think his perspective was more like, should we even have that chapter or should we just cut it? We don’t have that much to say about multipolar outcomes per se. He was not reading one chapter on multipolar outcomes as too little, which I think in some way reflects the vibe. The vibe of the book is like, this is a thing that could happen. It’s no more likely than the decisive strategic advantage, or perhaps even less likely, and fewer words are spilled on it. But I think the arguments don’t really go there, and in some sense, the vibe is not entirely a reflection of some calculated argument Nick believed and just wasn’t saying. Yeah, I don’t know."} {"prompt": "Yeah. It was interesting. So last year I reread, I think a large part, maybe not all of the book.", "completion": "Oh man, you should call me on all my false claims about Superintelligence then."} {"prompt": "Well, last year was a while ago. One thing I noticed is that at the start of the book, and also whenever he had a podcast interview about the thing, he often did take great pains to say, look, the amount of time I spend on a topic in the book is not the same thing as my likelihood assessment of it. And yeah, it’s definitely to some degree weighted towards things he thinks he can talk about, which is fine. And he definitely, in a bunch of places says, yeah, X is possible. If this happened, then that other thing would happen. And I think it’s very easy to read likelihood assessments into that that he’s actually just not making.", "completion": "I do think he definitely has some empirical beliefs that are way more on the decisive strategic advantage end of the spectrum, and I do think the vibe can go even further in that direction."} {"prompt": "Yeah, all right. 
The next thing I’d like to talk about is, what technical problems could cause existential risk and how you think about that space? So yeah, I guess first of all, how do you see the space of which technical problems might cause AI existential risk, and how do you carve that up?", "completion": "I think I probably have slightly different ways of carving it up for research questions that one might work on, versus root causes of failure that might lead to doom."} {"prompt": "Okay.", "completion": "Maybe starting with the root cause of failure. I certainly spend most of my time thinking about alignment or intent alignment. That is, I’m very concerned about a possible scenario where AI systems, basically as an artifact of the way they’re trained, most likely, are trying to do something that’s very bad for humans.", "completion": "For example, AI systems are trying to cause the camera to show happy humans. In the limit, this really incentivizes behaviors like ensuring that you control the camera and you control what pixels or what light is going into the camera, and if humans try and stop you from doing that, then you don’t really care about the welfare of the humans. Anyway, so the main thing I think about is that kind of scenario where somehow the training process leads to an AI system that’s working at cross purposes to humanity.", "completion": "So maybe I think of that as half of the total risk in a transition to AI, in the sort of early days of shifting from humans doing the cognitive work to AI doing the cognitive work. And then there’s another half of difficulties where it’s a little bit harder to say if they’re posed by technical problems or by social ones. For both of these, it’s very hard to say whether the doom is due to technical failure, or due to social failure, or due to whatever. But there are a lot of other ways in which, if you think of human society as the repository of what humans want, the thing that will ultimately go out into space and determine what happens with space, there are lots of ways in which that could get messed up during a transition to AI. So you could imagine that AI will enable significantly more competent attempts to manipulate people, such as with significantly higher quality rhetoric or argument than humans have traditionally been exposed to. So to the extent that the process of us collectively deciding what we want is calibrated to the arguments humans make, then just like most technologies, AI has some way of changing that process, or some prospect of changing that process, which could lead to ending up somewhere different. I think AI has an unusually large potential impact on that process, but it’s not different in kind from the internet or phones or whatever. I think for all of those things, you might be like, well, I care about this thing. Like the humans, we collectively care about this thing, and to the extent that we would care about different things if technology went differently, in some sense, we probably don’t just want to say, whatever way technology goes, that’s the one we really wanted.", "completion": "We might want to look out over all the ways technology could go and say, to the extent there’s disagreement, this is actually the one we most endorse. So I think there’s some concerns like that. I think another related issue is… actually, there’s a lot of issues of that flavor. 
I think most people tend to be significantly more concerned with the risk of everyone dying than the risk of humanity surviving, but going out into space and doing the wrong thing. There are exceptions of people on the other side who are like, man, Paul is too concerned with the risk of everyone dying and not enough concerned with the risk of doing weird stuff in space, like Wei Dai really often argues for a lot of these risks, and tries to prevent people from forgetting about them or failing to prioritize them enough.", "completion": "Anyway, I think a lot of the things I would list, other than alignment, that loom largest to me are in that second category of humanity survives, but does something that in some alternative world we might have regarded as a mistake. I’m happy to talk about those, but I don’t know if that actually is what you have in mind or what most listeners care about. And I think there’s another category of ways that we go extinct where in some sense AI is not the weapon of extinction or something, but just plays a part in the story. So if AI contributes to the start of a war, and then the war results or escalates to catastrophe.", "completion": "For any catastrophic risk that might face humanity, maybe we might have mentioned this briefly before, technical problems around AI can have an effect on how well humanity handles that problem, so AI can have an effect on how well humanity responds to some sudden change in its circumstances, and a failure to respond well may result in a war escalating, or serious social unrest or climate change or whatever."} {"prompt": "Yeah, okay. I guess I’ll talk a little bit about intent alignment, mostly because that’s what I’ve prepared for the most.", "completion": "That’s also what I spend almost all my time thinking about, so I love talking about intent alignment."} {"prompt": "Okay, so you were saying that you carve up research projects that one could do, and root causes of failure, slightly differently. Was intent alignment a root cause of failure or a research problem?", "completion": "Yeah, I think it’s a root cause of failure."} {"prompt": "Okay, so one question that I have about intent alignment is, it seems it’s sort of relating to this, what I might call a Humean decomposition. This philosopher David Hume said something approximately like, “Look, the thing about the way people work, is that they have beliefs, and they have desires. And beliefs can’t motivate you, only desires can, and the way they produce action is that you try to do actions, which according to your beliefs, will fulfill your desires.” And by talking about intent alignment, it seems you’re sort of imagining something similar for AI systems, but it’s not obviously true that that’s how AI systems work. So in this case, whether it’s optimized, could potentially depend on your background knowledge, right?", "completion": "That’s right. Yeah, different people could disagree. One person could think something is optimizing for A and the other person could think someone is optimizing for not A. That is possible in principle."} {"prompt": "And not only could they think that, they could both be right, in a sense.", "completion": "That’s right. There’s no fact of the matter beyond what the person thinks. And so from that perspective, optimization is mostly something we’re talking about from our perspective as algorithm designers. 
So when we’re designing the algorithm, we are in this epistemic state, and the thing we’d like to do, is, from our epistemic state, there shouldn’t be this optimization for doom. We shouldn’t end up with these correlations where the algorithm we write is more likely to produce actions that lead to doom. And that’s something where we are retreating. Most of the time we’re designing an algorithm, we’re retreating to some set of things we know and some kind of reasoning we’re doing. Or like, within that universe, we want to eliminate this possible bad correlation."} {"prompt": "Okay.", "completion": "Yeah, this exposes tons of rough edges, which I’m certainly happy to talk about lots of."} {"prompt": "Yeah. One way you could, I guess it depends a bit on whether you’re talking about correlation or mutual information or something, but on some of these definitions, one way you can reduce any dependence is if you know with certainty what the system is going to do. Or perhaps even if I don’t know exactly what’s going to happen, but I know it will be some sort of hell world. And then there’s no correlation, so it’s not optimizing for doom, it sounds like.", "completion": "Yeah. I think the way that I am thinking about that is, I have my robot and my robot’s taken some torques. Or I have my thing connected to the internet and it’s sending some packets. And in some sense we can be in the situation where it’s optimizing for doom, and certainly doom is achieved and I’m merely uncertain about what path leads to doom. I don’t know what packets it’s going to send. And I don’t know what packets lead to doom. If I knew, as algorithm designer, what packets lead to doom, I’d just be like, “Oh, this is an easy one. If the packet is going to suddenly lead to doom, no go.” I don’t know what packets lead to doom, and I don’t know what packets it’s going to output, but I’m pretty sure the ones it’s going to output lead to doom. Or I could be sure they lead to doom, or I could just be like, those are more likely to be doomy ones.", "completion": "And the situation I’m really terrified of as a human is the one where there’s this algorithm, which has the two following properties: one, its outputs are especially likely to be economically valuable to me for reasons I don’t understand, and two, its outputs are especially likely to be doomy for reasons I don’t understand. And if I’m a human in that situation, I have these outputs from my algorithm and I’m like, well, darn. I could use them or not use them. If I use them, I’m getting some doom. If I don’t use them, I’m leaving some value on the table, which my competitors could take."} {"prompt": "In the sense of value where-", "completion": "Like I could run a better company, if I used the outputs. I could run a better company that would have, each year, some probability of doom. And then the people who want to make that trade off will be the ones who end up actually steering the course of humanity, which they then steer to doom."} {"prompt": "Okay. So in that case, maybe the Humean decomposition there is: there’s this correlation between how good the world is or whatever, and what the system does. And the direction of the correlation is maybe going to be the intent or the motivations of the system. And maybe the strength of the correlation, or how tightly you can infer, that’s something more like capabilities or something. Does that seem right?", "completion": "Yeah. 
I guess I would say that on this Humean perspective, there’s kind of two steps, both of which are, to me, about optimization. One is, we say the system has accurate beliefs, by which we’re talking about a certain correlation. To me, this is also a subjective condition. I say the system correctly believes X, to the extent there’s a correlation between the actual truth of affairs and some representation it has. So one step like that. And then there’s a second step where there’s a correlation between which action it selects, and its beliefs about the consequences of the action. In some sense I do think I want to be a little bit more general than the framework you might use for thinking about humans.", "completion": "In the context of an AI system, there’s traditionally a lot of places where optimization is being applied. So you’re doing stochastic gradient descent, which is itself significant optimization over the weights of your neural network. But then those optimized weights will, themselves, tend to do optimization, because some weights do, and the weights that do, you have optimized towards them. And then also you’re often combining that with explicit search: after you’ve trained your model, often you’re going to use it as part of some search process. So there are a lot of places optimization is coming into this process. And so I’m not normally thinking about the AI that has some beliefs and some desires that decouple, but I am trying to be doing this accounting or being like, well, what is a way in which this thing could end up optimizing for doom?", "completion": "How can we get some handle on that? And I guess I’m simultaneously thinking, how could it actually be doing something productive in the world, and how can it be optimizing for doom? And then trying to think about, is there a way to decouple those, or get the one without the other. But that could be happening. If I imagine an AI, I don’t really imagine it having a coherent set of beliefs. I imagine it being this neural network, such that there are tons of parts of the neural network that could be understood as beliefs about something, and tons of parts of the neural network that could be understood as optimizing. So it’d be this very fragmented, crazy mind. Probably human minds are also like this, where they don’t really have coherent beliefs and desires. But in the AI, we want to stamp out all of the desires that are not helping humans get what they want, or at least, at a minimum, all of the desires that involve killing all the humans."} {"prompt": "Now that I sort of understand intent alignment, sometimes people divide this up into outer and inner versions of intent alignment. Sometimes people talk about various types of robustness that properties could have, or that systems could have. I’m wondering, do you have a favorite of these further decompositions, or do you not think about it that way as much?", "completion": "I mentioned before this or of ands, where there’s lots of different paths you could go down, and then within each path there’ll be lots of breakdowns of what technical problems need to be resolved. I guess I think of outer and inner alignment as: for several of the leaves in this or of ands, or several of the branches in this or of ands, for several of the possible approaches, you can talk about “these things are needed to achieve outer alignment and these things are needed to achieve inner alignment, and with their powers combined we’ll achieve a good outcome”. 
Often you can’t talk about such a decomposition. In general, I don’t think you can look at a system and be like, “oh yeah, that part’s outer alignment and that part’s inner alignment”. So the times when you can talk about it most, or the way I use that language most often, is for a particular kind of alignment strategy that’s like a two step plan. Step one is, develop an objective that captures what humans want well enough to be getting on with. It’s going to be something more specific, but you have an objective that captures what humans want in some sense. Ideally it would exactly capture what humans want. So, you look at the behavior of a system and you’re just exactly evaluating how good for humans is it to deploy a system with that behavior, or something. So you have that as step one and then that step would be outer alignment. And then step two is, given that we have an objective that captures what humans want, let’s build a system that’s internalized that objective in some sense, or is not doing any other optimization beyond pursuit of that objective."} {"prompt": "And so in particular, the objective is an objective that you might want the system to adopt, rather than an objective over systems?", "completion": "Yeah. I mean, we’re sort of equivocating in this way that reveals problematicness or something, but the first objective is an objective. It is a ranking over systems, or some reward that tells us how good a behavior is. And then we’re hoping that the system then adopts that same thing, or some reflection of that thing, like with a ranking over policies. And then we just get the obvious analog of that over actions."} {"prompt": "And so you think of these as different subproblems to the whole thing of intent alignment, rather than objectively, oh, this system has an outer alignment problem, but the inner alignment’s great, or something?", "completion": "Yeah, that’s right. I think this makes sense on some approaches and not on other approaches. I am most often thinking of it as: there’s some set of problems that seem necessary for outer alignment. I don’t really believe that the problems are going to split into “these are the outer alignment problems, and these are the inner alignment problems”. I think of it more as the outer alignment problems, or the things that are sort of obviously necessary for outer alignment, are more likely to be useful stepping stones, or warm up problems, or something. I suspect in the end, it’s not like we have our piece that does outer alignment and our piece that does inner alignment, and then we put them together.", "completion": "I think it’s more like, there were a lot of problems we had to solve. In the end, when you look at the set of problems, it’s unclear how you would attribute responsibility. There’s no part that’s solving outer versus inner alignment. But there were a set of sub problems that were pretty useful to have solved. It’s just, the outer alignment thing here is acting as an easy, special case to start with, or something like that. It’s not technically a special case. There’s actually something worth saying there probably, which is, it’s easier to work on a special case, than to work on some vaguely defined, “here’s a thing that would be nice”. 
So when I’m thinking about my research and I want to focus on subproblems, specializing on the outer alignment part, which I’m doing more from this warmup problem perspective, I most often think of it in terms of high stakes versus low stakes decisions.", "completion": "So in particular, if you’ve solved what we’re describing as outer alignment, if you have a reward function that captures what humans care about well enough, and if the individual decisions made by your system are sufficiently low stakes, then it seems like you can get a good outcome just by doing online learning. That is, you constantly retrain your system as it acts. And it can do bad things for a while as it moves out of distribution, but eventually you’ll fold that data back into the training process. And so if you had a good reward function and the stakes are low, then you can get a good outcome. So when I say that I think about outer alignment as a subproblem, I mostly mean that I ignore the problem of high stakes decisions, or fast acting catastrophes, and just focus on the difficulties that arise, even when every individual decision is very low stakes."} {"prompt": "Firstly, does it behave competently, and then does it continue to reliably achieve the stuff that you wanted? And that’s supposed to be more like inner alignment, because if the system had really internalized the objective, then it would supposedly continue pursuing it in later places. And there are some distinctions between that and, especially the frame where alignment is supposed to be about: are you representing this objective in your head? And I’m wondering how do you think about the differences between those frames or whether you view them as basically the same thing?", "completion": "I think I don’t view them as the same thing. I think of those two splits and then a third split, which I’ll allude to briefly: avoiding very fast catastrophes versus average case performance. I think of those three splits as just all roughly agreeing. There will be some approaches where one of those splits is a literal split of the problems you have to solve, where it literally factors into doing one of those and then doing the other. I think that the exact thing you stated is a thing people often talk about, but I don’t think it really works even as a conceptual split, quite. Where the main problem is just, if you train AI systems to do well in some distribution, there are two big, related limitations you get.", "completion": "One is that it doesn’t work off distribution. The other is just that you only have an average case property over that distribution. So it seems in the real world, it is actually possible, or it looks like it’s almost certainly going to be possible, for deployed AI systems to fail quickly enough that the actual harm done by individual bad decisions is much too large to bound with an average case guarantee.", "completion": "So you can imagine the system which appears to work well on distribution, but actually with one in every quadrillion decisions, it just decides now it’s time to start killing all the humans, and that system is quite bad. And I think that in practice, probably it’s better to lump that problem in with distributional shift, which kind of makes sense. And maybe people even mean to include that - it’s a little bit unclear exactly what they have in mind, but distributional shift is just changing the probabilities of outcomes. And the concern is really just things that were improbable under your original distribution. 
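A toy sketch of the low-stakes, online-learning setup described earlier in this answer, assuming you already have a reward function that captures what you care about well enough; the environment, contexts, and reward here are made-up stand-ins:

```python
# Toy illustration of the low-stakes regime: keep acting, keep folding the data
# gathered while acting straight back into training. Individual decisions are
# assumed to be low stakes, so transient mistakes are recoverable.
import random
from collections import defaultdict

def reward_fn(context: int, action: int) -> float:
    # Stand-in for "a reward function that captures what humans care about well enough".
    return 1.0 if action == context % 3 else 0.0

def run_online_learning(num_rounds: int = 5000, epsilon: float = 0.1) -> dict:
    value = defaultdict(float)    # running estimate of reward for each (context, action)
    counts = defaultdict(int)
    for _ in range(num_rounds):
        context = random.randrange(10)
        # Act with the current policy (epsilon-greedy on the learned values).
        if random.random() < epsilon:
            action = random.randrange(3)
        else:
            action = max(range(3), key=lambda a: value[(context, a)])
        r = reward_fn(context, action)
        # Fold the new datapoint straight back into the "training process".
        counts[(context, action)] += 1
        value[(context, action)] += (r - value[(context, action)]) / counts[(context, action)]
    return dict(value)

if __name__ == "__main__":
    learned = run_online_learning()
    # In context 4, action 1 (= 4 % 3) should have learned value near 1.0.
    print(learned.get((4, 1)), learned.get((4, 0)))
```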
And you could have a problem either because you’re in a new distribution where those things go from being very rare to being common, or you could have a problem just because they were relatively rare, so you didn’t encounter any during training, but if you keep sampling, even on distribution, eventually one of those will get you and cause trouble."} {"prompt": "Maybe they were literally zero in the data set you drew, but not in the “probability distribution” that you drew your data set from.", "completion": "Yeah, so I guess maybe that is fair. I really naturally reach for the underlying probability distribution, but I think out of distribution, in some sense, is most likely to be our actual split of the problem if we mean the empirical distribution over the actual episodes at hand. Anyway, I think of all three of those decompositions, then. That was a random caveat on the out of distribution one."} {"prompt": "Sure.", "completion": "I think of all of those related breakdowns. My guess is that the right way of going doesn’t actually respect any of those breakdowns, and doesn’t have a set of techniques that solve one versus the other. But I think it is very often helpful. It’s just generally, when doing research, helpful to specialize on a subproblem. And I think often one branch or the other of one of those splits is a helpful way to think about the specialization you want to do, during a particular research project. The splits I most often use are this low stakes one where you can train online and individual decisions are not catastrophic, and the other arm of that split is: suppose you have the ability to detect a catastrophe if one occurs, or you trust your ability to assess the utility of actions. And now you want to build a system which doesn’t do anything catastrophic, even when deployed in the real world on a potentially different distribution, encountering potentially rare failures.", "completion": "That’s the split I most often use, but I think none of these are likely to be respected by the actual list of techniques that together address the problem. But often one half or the other is a useful way to help zoom in on what assumptions you want to make during a particular research project."} {"prompt": "And why do you prefer that split?", "completion": "I think most of all, because it’s fairly clear what the problem statement is. So the problem statement, there, is just a feature of the thing outside of your algorithm. Like, you’re writing some algorithm. And then your problem statement is, “Here is a fact about the domain in which you’re going to apply the algorithm.” The fact is that it’s impossible to mess things up super fast. And it’s nice to have a problem statement which is entirely external to the algorithm. If you want to just say, “here’s the assumption we’re making now; I want to solve that problem”, it’s great to have an assumption on the environment be your assumption. There’re some risk if you say, “Oh, our assumption is going to be that the agent’s going to internalize whatever objective we use to train it.” The definition of that assumption is stated in terms of, it’s kind of like helping yourself to some sort of magical ingredient. And, if you optimize for solving that problem, you’re going to push into a part of the space where that magical ingredient was doing a really large part of the work. Which I think is a much more dangerous dynamic. If the assumption is just on the environment, in some sense, you’re limited in how much of that you can do. 
You have to solve the remaining part of the problem you didn’t assume away. And I’m really scared of sub-problems which just assume that some part of the algorithm will work well, because I think you often just end up pushing an inordinate amount of the difficulty into that step."} {"prompt": "What do you think about this approach to research where you’re just like, “Okay, let’s like figure out these basic problems and try and get a good formalism that we can work from, from there on.”", "completion": "I think, yeah. This is mostly a methodological question, probably, rather than a question about the situation with respect to AI, although it’s not totally clear; there may be differences in belief about AI that are doing the real work, but methodologically I’m very drawn - Suppose you want to understand better, what is optimization? Or you have some very high level question like that. Like, what is bounded rationality? I am very drawn to an approach where you say, “Okay, we think that’s going to be important down the line.” I think at some point, as we’re trying to solve alignment, we’re going to really be hurting for want of an understanding of bounded rationality. I really want to just be like, “Let’s just go until we get to that point, until we really see what problem we wanted to solve, and where it was that we were reaching for this notion of bounded rationality that we didn’t have.”", "completion": "And then at that point, we will have some more precise specification of what we actually want out of this theory of bounded rationality."} {"prompt": "Okay.", "completion": "And I think that is the moment to be trying to dig into those concepts more. I think it’s scary to try and go the other way. I think it’s not totally crazy at all. And there are reasons that you might prefer it. I think the basic reason it’s scary is that there’s probably not a complete theory of everything for many of these questions. There’s a bunch of questions you could ask, and a bunch of answers you get that would improve your understanding. But we don’t really have a statement of what it is we actually seek. And it’s just a lot harder to research when you’re like, I want to understand. Though in some domains, this is the right way to go.", "completion": "And that’s part of why it might come down to facts about AI, whether it’s the perfect methodology in this domain. But I think it’s tough to be like, “I don’t really know what I want to know about this thing. I’m just kind of interested in what’s up with optimization”, and then researching optimization. Relative to being like, “Oh, here’s a fairly concrete question that I would like to be able to answer, a fairly concrete task I’d like to be able to address. And which I think is going to come down to my understanding of optimization.” I think that’s just an easier way to better understand what’s up with optimization."} {"prompt": "Yeah. So at these moments where you realize you need a better theory or whatever, are you imagining them looking like, “Oh, here’s this technical problem that I want to solve and I don’t know how to, and it reminds me of optimization?” Or, what does the moment look like when you’re like, “Ah, now’s the time.”", "completion": "I think the way the whole process most often looks is: you have some problem. The way my research is organized, it’s very much like, “Here’s the kind of thing our AI could learn”, for which it’s not clear how our aligned AI learned something that’s equally useful. 
And I think about one of these cases and dig into it. And I’m like, “Here’s what I want. I think this problem is solvable. Here’s what I think the aligned AI should be doing.”", "completion": "And I’m thinking about that. And then I’m like, “I don’t know how to actually write down the algorithm that would lead to the aligned AI doing this thing.” And walking down this path, I’m like, “Here’s a piece of what it should be doing. And here’s a piece of how the algorithm should look.”", "completion": "And then at some point you step back and you’re like, “Oh wow. It really looks like what I’m trying to do here is algorithmically test for one thing being optimized over another”, or whatever. And that’s a particularly doomy sounding example. But maybe I have some question like that. Or I’m wondering, “What is it that leads to the conditional independences the human reports in this domain. I really need to understand that better.” And I think it’s the most often for me not then to be like, “Okay, now let’s go understand that question. Now that it’s come up.” It’s most often, “Let us flag and try and import everything that we know about that area.” I’m now asking a question that feels similar to questions people have asked before. So I want to make sure I understand what everyone has said about that area.", "completion": "This is a good time to read up on everything that looks like it’s likely to be relevant. The reading up is cheap to do in advance. So you should be trigger happy with that one. But then there’s no actual pivot into thinking about the nature of optimization. It’s just continuing to work on this problem. Some of those lemmas may end up feeling like statements about optimization, but there was no step where you were like, “Now it’s time to think about optimization.” It’s just like, “Let us keep trying to design this algorithm, and then see what concepts fall out of that.”"} {"prompt": "And you mentioned that there were some domains where, actually thinking about the fundamentals early on was the right thing to do. Which domains are you thinking of? And what do you see as the big differences between those ones and AI alignment?", "completion": "So I don’t know that much about the intellectual history of almost any fields. The field I’m most familiar with by far is computer science. I think in computer science, especially - so my training is in theoretical computer science and then I spend a bunch of time working in machine learning and deep learning - I think the problem first perspective just generally seems pretty good. And I think to the extent that “let’s understand X” has been important, it’s often at the problem selection stage, rather than “now we’re going to research X in an open-ended way”. It’s like, “Oh, X seems interesting. And this problem seems to shed some light on X. So now that’s a reason to work on this problem.” Like, that’s a reason to try and predict this kind of sequence with ML or whatever. It’s a reason to try and write an algorithm to answer some question about graphs.", "completion": "So I think in those domains, it’s not that often the case, that you just want to start off and have some high big picture question, and then think about it abstractly. My guess would be that in domains where more of the game is walking up to nature and looking at things and seeing what you see, it’s a little bit different. It’s not as driven as much by you’re coming up with an algorithm and running into constraints in designing an algorithm. 
I don’t really know that much about the history of science though. So I’m just guessing that that might be a good approach sometimes."} {"prompt": "All right. So, we’ve talked a little bit about the way you might decompose inner alignment, or the space of dealing with existential risk, into problems, one of which is inner alignment. I’d like to talk a little bit on a high level about your work on the solutions to these problems, and other work that people have put out there. So the first thing I want to ask is: as I mentioned, I’m in a research group, and a lot of what we do is think about how a machine learning system could learn some kind of objective from human data. So perhaps there’s some human who has some desires, and the human acts a certain way because of those desires. And we use that to do some kind of inference. So this might look like inverse reinforcement learning. A simple version of it might look like imitation learning. And I’m wondering what you think of these approaches for things that look more like outer alignment, more like trying to specify what a good objective is.", "completion": "So broadly, I think there are two kinds of goals you could be trying to serve with work like that. For me, there’s this really important distinction as we try and incorporate knowledge that a human demonstrator or human operator lacks. The game changes as you move from the regime where you could have applied imitation learning, in principle, because the operator could demonstrate how to do the task, to the domain where the operator doesn’t understand how to do the task. At that point, they definitely aren’t using imitation learning. And so from my perspective, one thing you could be trying to do with techniques like this, is work well in that imitation learning regime. In the regime where you could have imitated the operator, can you find something that works even better than imitating the operator? And I am pretty interested in that. And I think that imitating the operator is not actually that good a strategy, even if the operator is able to do the task in general. So I have worked some on reinforcement learning from human feedback in this regime. So imagine there’s a task where a human understands what makes performance good or bad: just have the human evaluate individual trajectories, learn to predict those human evaluations, and then optimize that with RL.", "completion": "I think the reason I’m interested in that technique in particular is I think of it as the most basic thing you can do, or that most makes clear exactly what the underlying assumption is that is needed for the mechanism to work. Namely, you need the operator to be able to identify which of two possible executions of a behavior is better. Anyway, there’s then this further thing. And I don’t think that that approach is the best approach. I think you can do better than asking the human operator, “which of these two is better”.", "completion": "I think it’s pretty plausible that basically past there, you’re just talking about data efficiency, like how much human time do you need and so on, and how easy is it for the human, rather than a fundamental conceptual change. But I’m not that confident of that. There’s a second thing you could want to do where you’re like, “Now let’s move into the regime where you can’t ask the human which of these two things is better, because in fact, one of the things the human wants to learn about is which of these two behaviors is better. 
The human doesn’t know; they’re hoping AI will help them understand.”"} {"prompt": "Actually what’s the situation in which we might want that to happen?", "completion": "Might want to move beyond the human knowing?"} {"prompt": "Yeah. So suppose we want to get to this world where we’re not worried about AI systems trying to kill everyone.", "completion": "Mhm."} {"prompt": "And we can use our AI systems to help us with that problem, maybe. Can we somehow get to some kind of world where we’re not going to build really smart AI systems that want to destroy all value in the universe, without solving these kinds of problems where it’s difficult for us to evaluate which solutions are right?", "completion": "I think it’s very unclear. I think eventually, it’s clear that AI needs to be doing these tasks that are very hard for humans to evaluate which answer is right. But it’s very unclear how far off that is. That is, you might first live in a world where AI has had a crazy transformative impact before AI systems are regularly doing things that humans can’t understand. Also there are different degrees of “beyond a human’s ability to understand” what the AI is doing. So I think that’s a big open question, but in terms of the kinds of domains where you would want to do this, there’s generally this trade-off between over what horizon you evaluate behavior, or how much you rely on hindsight, and how much do you rely on foresight, or the human understanding which behavior will be good."} {"prompt": "Yep.", "completion": "So the more you want to rely on foresight, the more plausible it is that the human doesn’t understand well enough to do the operation. So for example, if I imagine my AI is sending an email for me. One regime is the regime where it’s basically going to send the email that I like most. I’m going to be evaluating either actually, or it’s going to be predicting what I would say to the question, “how good is this email?” And it’s going to be sending the email for which Paul would be like, “That was truly the greatest email.” The second regime where I send the email and then my friend replies, and I look at the whole email thread that results, and I’m like, “Wow, that email seemed like it got my friend to like me, I guess that was a better email.” And then there’s an even more extreme one where then I look back on my relationship with my friend in three years and I’m like, “Given all the decisions this AI made for me over three years, how much did they contribute to building a really lasting friendship?”", "completion": "I think if you’re going into the really short horizon where I’m just evaluating an email, it’s very easy to get to the regime where I think AI can be a lot better than humans at that question. Just like, it’s very easy for there to be empirical facts and be like, “What kind of email gets a response?” Or “What kind of email will be easily understood by the person I’m talking to?” Where an AI that has sent a hundred billion emails, will just potentially have a big advantage over me as a human. And then as you push out to longer horizons, it gets easier for me to evaluate, it’s easier for a human to be like, “Okay, the person says they understood.” I can evaluate the email in light of the person’s response as well as an AI could.", "completion": "But as you move out to those longer horizons, then you start to get scared about that evaluation. It becomes scarier to deal with. There starts to be more room for manipulation of the metrics that I use. 
I’m saying all that to say, there’s this general factor of, when we ask something like “Do AI systems need to do things where humans couldn’t evaluate which of two behaviors is better?”, it depends a lot on how long we make the behaviors, and how much hindsight we give to human evaluators."} {"prompt": "Okay.", "completion": "And in general, that’s part of the tension or part of the game. We can make the thing clear by just talking about really long horizon behaviors. So if I’m like, we’re going to write an infrastructure bill, and I’m like, “AI, can you write an infrastructure bill for me?”", "completion": "It’s very, very hard for me to understand which of two bills is better. And there is the thing where again, in the long game, you do want AI systems helping us as a society to make that kind of decision much better than we would if it was just up to humans to look at the bill, or even a thousand humans looking at the bill. It’s not clear how early you need to do that. I am particularly interested in all of the things humans do to keep society on track. All of the things we do to manage risks from emerging technologies, all the things we do to cooperate with each other, et cetera. And I think a lot of those do involve… I’m more interested in AI because it may help us make those decisions better, rather than make them faster. And I think in cases where you want something more like wisdom, it’s more likely that the value added, if AI is to add value, will be in ways that humans couldn’t easily evaluate."} {"prompt": "Yeah. So we were talking about imitation learning or inverse reinforcement learning. So looking at somebody do a bunch of stuff and then trying to infer what they were trying to do. We were talking about, there are these solutions to outer alignment, and you were saying, yeah, it works well for things where you can evaluate what’s going to happen, but for things that can’t… and I think I cut you off around there.", "completion": "Yeah, I think that’s interesting. I think you could have pursued this research. Either trying to improve the imitation learning setting, like “Look, imitation learning actually wasn’t the best thing to do, even when we were able to demonstrate.” I think that’s one interesting thing to do, which is the context where I’ve most often thought about this kind of thing. A second context is where you want to move into this regime where a human can’t say which thing is better or worse. I can imagine, like you’ve written some bill, and we’re like, how are we going to build an AI system that writes good legislation for us? In some sense, actually the meat of the problem is not writing up the legislation, it’s helping predict which legislation is actually good. We can sort of divide the problem into those two pieces. One is an optimization problem, and one is a prediction problem. And for the prediction component, that’s where it’s unclear how you go beyond human ability. It’s very easy to go beyond human ability on the optimization problem: just dump more compute into optimizing.", "completion": "I think you can still try and apply things like inverse reinforcement learning though. You can be like: “Humans wrote a bunch of bills. Those bills were imperfect attempts to optimize something about the world. 
You can try and back out from looking at not only those bills, but all the stories people write, all the words they say, blah, blah, blah.” We can try and back out what it is they really wanted, and then give them a prediction of how well the bill will achieve what they really wanted? And I think that is particularly interesting. In some sense, that is, from a long-term safety perspective more interesting than the case where the human operator could have understood the consequences of the AI’s proposals. But I am also very scared. I don’t think we currently have really credible proposals for inverse reinforcement learning working well in that regime."} {"prompt": "What’s the difficulty of that?", "completion": "So I think the hardest part is I look at some human behaviors, and the thing I need to do is disentangle which aspects of human behavior are limitations of the human - which are things the human wishes about themselves they could change - and which are reflections of what they value. And in some sense, in the imitation learning regime, we just get to say “Whatever. We don’t care. We’re getting the whole thing. If the humans make bad predictions, we get bad predictions.” In the inverse reinforcement learning case, we need to look at a human who is saying these things about what they want over the long-term or what they think will happen over the long-term, and we need to decide which of them are errors. There’s no data that really pulls that apart cleanly. So it comes down to either facts about the prior, or modeling assumptions.", "completion": "And so then, the work comes down to how much we trust those modeling assumptions in what domains. And I think my basic current take is: the game seems pretty rough. We don’t have a great menu of modeling assumptions available right now. I would summarize the best thing we can do right now as, in this prediction setting, amounting to: train AI systems to make predictions about all of the things you can easily measure. Train AI systems to make judgements in light of AI systems’ predictions about what they could easily measure, or maybe judgements in hindsight, and then predict those judgements in hindsight.", "completion": "Maybe the prototypical example of this is, train an AI system to predict a video of the future. Then have humans look at the video of the future and decide which outcome they like most. I think the reason to be scared of like the most developed form of this, so the reason I’m scared of the most developed form of this, is we are in the situation now where AI really wants to push on this video of the future that’s going to get shown to the human. And distinguishing between the video of the future that gets shown to the human and what’s actually happening in the world, seems very hard.", "completion": "I guess that’s, in some sense, the part of the problem I most often think about. So either looking forward to a future where it’s very hard for a human to make heads or tails of what’s happening, or a future where a human believes they can make heads and tails of what’s happening, but they’re mistaken about that. For example, a thing we might want our AIs to help us do is to keep the world sane, and make everything make sense in the world. 
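On the modeling-assumptions point a little earlier in this answer: the standard assumption in the inverse reinforcement learning literature (not something Paul spells out here) is that the human chooses roughly Boltzmann-rationally. A toy sketch, with made-up numbers, of why behavior alone cannot separate what someone values from how reliably they pursue it:

```python
# The usual IRL modeling assumption: P(choose option i) is proportional to
# exp(beta * reward[i]), where beta measures how reliably the chooser picks
# what they actually value. Scaling rewards up while scaling beta down leaves
# the predicted behavior identical, so the split between values and limitations
# has to come from the prior or the modeling assumptions, not from the data.
import math

def boltzmann_choice_probs(rewards, beta):
    weights = [math.exp(beta * r) for r in rewards]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    rewards = [0.0, 1.0, 3.0]                      # one candidate "true" reward
    print(boltzmann_choice_probs(rewards, beta=2.0))
    print(boltzmann_choice_probs([2 * r for r in rewards], beta=1.0))  # identical output
```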
So if our AI shows a several videos of the future, and nine of them are incomprehensible and one of them makes perfect sense, we’re like, “Great, give me the future that makes perfect sense.” And the concern is just, do we get there by having an AI which is instead of making the world make sense, is messing with our ability to understand what’s happening in the world? So we just, see the kind of thing we wanted to see or expected to see. And, to the extent that we’re in an outer alignment failure scenario, that’s kind of what I expect failures to ultimately look like."} {"prompt": "So in the realm of things roughly like outer alignment, or alignment dealing with low stakes, repeatable problems, what kind of solutions are you most interested in from a research perspective?", "completion": "I don’t have a very short answer to this question. So I guess you’ll get a kind of long answer to this question."} {"prompt": "That in itself is interesting.", "completion": "Yeah. And maybe there’s also two kinds of answers I can give. One is like the thing that I am most animated by, that I am working on myself. Another is a broader, here are kinds of things people do in the world that I’m particularly excited by, amongst existing research directions. Maybe my default would be to go through some of the things people do in the world that I’m excited by, and then turn to the thing I’m most animated by but I’d be happy to do the other order if that seems better."} {"prompt": "Let’s try in the first order.", "completion": "I guess one thing that seems like it comes up constantly as a useful building block, or an essential ingredient in many possible plans, which also seems both tractable to work on and really hard, is interpretability. So we’re very frequently in a situation where we’ve trained some very large neural network. We know that it’s able to make good predictions in some domain, and we’re not really able to understand what it knows about that domain. Sometimes we’re able to play some clever game and say something about why it’s making the prediction it’s making, or what kind of thing it knows about or doesn’t know about. But for the most part, our methods there are very similar to just doing some kind of behavioral analysis where we’re like, “Oh, if you change this part of the input it gets it wrong. So apparently that’s what it’s paying attention to.” I think there’s some hope for techniques that are more mechanically looking at what computation is performed by the model, and then somehow understanding something about what it has learned, so that you can better understand whether predictions it’s making are reasonable, et cetera. So I guess that’s just something I’m quite interested in, to the extent that we’re able to make headway on it."} {"prompt": "Okay. And how does that help in these outer alignment type settings?", "completion": "Yeah. So I think the biggest thing is that, imagine your model again, which is predicting videos from the future, and you’d like to distinguish the case where actually everything in the future is great, versus the case where actually the future is terrible, but there’s a nice little village set up in front of the camera. We’re concerned about models, which are deliberately obfuscating what’s happening on camera. 
That is AIs which are deliberately planning to put up the nice little village: they’re building the houses, they’re ensuring the camera doesn’t go out of the village, etc."} {"prompt": "Yeah.", "completion": "This is a very crude metaphor, but the AI which is deliberately doing that, which is choosing actions from this tiny space of actions to engineer this very specific outcome, in some sense, somewhere deep in its heart, it understands a lot of what’s happening in the world. It understands that if the camera turned just this way, it would see something objectionable, so it doesn’t let it do that. And so it feels like if you have, in some sense, it doesn’t even feel like that much to ask of your interpretability tools to be able to reach inside and be like, “Oh, okay. Now if we look at what it’s thinking, clearly there’s this disconnect between what’s happening in the world and what’s reported to the human.” And I don’t think there are that many credible approaches for that kind of problem, other than some kind of headway on interpretability. So yeah, I guess that’s my story about how it helps."} {"prompt": "Okay.", "completion": "I think there are many possible stories about how it helps. That’s the one I’m personally most interested in."} {"prompt": "All right. So that’s one approach that you like.", "completion": "I mean, I think in terms of what research people might do, I’m just generally very interested in taking a task that is challenging for humans in some way, and trying to train AI systems to do that task, and seeing what works well, seeing how we can help humans push beyond their native ability to evaluate proposals from an AI. And tasks can be hard for humans in lots of ways. You can imagine having lay humans evaluating expert human answers to questions and saying, “How can we build an AI that helps expose this kind of expertise to a lay human?”", "completion": "The interesting thing is the case where you don’t have any trusted humans who have that expertise, where we as a species are looking at our AI systems and they have expertise that no humans have. And we can try and study that today by saying, “Imagine a case where the humans who are training the AI system, lack some expertise that other humans have.” And it gives us a nice little warm up environment in some sense."} {"prompt": "Okay.", "completion": "You could have the experts come in and say, “How well did you do?” You have gold standard answers, unlike in the final case. There’s other ways tasks can be hard for humans. You can also consider tasks that are computationally demanding, or involve lots of input data; tasks where human abilities are artificially restricted in some way; you could imagine people who can’t see are training an ImageNet model to tell them about scenes in natural language."} {"prompt": "Okay.", "completion": "Again, the model is that there are no humans who can see. You could ask, “Can we study this in some domain?” and the analogy would be that there’s no humans who can see. Anyway, so there’s I think a whole class of problems there, and then there’s a broader distribution over what techniques you would use for attacking those problems. I am very interested in techniques where AI systems are helping humans do the evaluation. So kind of imagine this gradual inductive process where as your AI gets better, they help the humans answer harder and harder questions, which provides training data to allow the AIs to get ever better. 
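[Editor's note: to make the bootstrapping loop Christiano sketches above a bit more concrete, here is a minimal toy illustration in Python. It is not code from OpenAI or from Christiano's research; every function here (human_judgement, train_model, and so on) is a hypothetical stand-in for a much more complicated real process.]

```python
# Toy sketch of "AI assistants help humans evaluate, which trains stronger AI":
# at each round the human judges answers with help from the previous round's
# model, and those judgements become the training signal for the next model.
from typing import Callable, List, Tuple

Example = str      # a task instance, e.g. a hard question
Judgement = float  # how good the assisted human judges an answer to be


def human_judgement(example: Example, answer: str,
                    assistants: List[Callable[[Example], str]]) -> Judgement:
    """Stand-in for a human scoring an answer while consulting assistant models."""
    hints = [assistant(example) for assistant in assistants]
    return float(bool(answer) and all(isinstance(h, str) for h in hints))


def train_model(data: List[Tuple[Example, str, Judgement]]) -> Callable[[Example], str]:
    """Stand-in for training on (task, answer, judgement) triples."""
    remembered = {ex: ans for ex, ans, score in data if score > 0.5}
    return lambda ex: remembered.get(ex, "no answer yet")


def bootstrap(tasks: List[Example], rounds: int) -> Callable[[Example], str]:
    assistants: List[Callable[[Example], str]] = []          # round 0: unassisted human
    model: Callable[[Example], str] = lambda ex: "draft answer to: " + ex
    for _ in range(rounds):
        data = [(ex, model(ex), human_judgement(ex, model(ex), assistants)) for ex in tasks]
        model = train_model(data)     # train on the assisted judgements...
        assistants = [model]          # ...then use the new model to assist next round
    return model
```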
I’m pretty interested in those kinds of approaches, which yeah, there are a bunch of different versions, or a bunch of different things along those lines.", "completion": "That was the second category. So far we have interpretability, and using AIs to help train AIs."} {"prompt": "Yep. There was also, what you were working on.", "completion": "The last category I’d give is just, I think even again in this sort of more imitation learning regime or in the regime where humans can tell what is good: doing things effectively, learning from small amounts of data, learning policies that are higher quality. That also seems valuable. I am more optimistic about that problem getting easier as AI systems improve, which is the main reason I’m less scared of our failure to solve that problem than failure to solve the other two problems. And then maybe the fourth category is just, I do think there’s a lot of room for sitting around and thinking about things. I mean, I’ll describe what I’m working on, which is a particular flavor of sitting around and thinking about things."} {"prompt": "Sure.", "completion": "But there’s lots of flavors of sitting around and thinking about “how would we address alignment” that I’m pretty interested in."} {"prompt": "All right.", "completion": "Onto the stuff that I’m thinking about?"} {"prompt": "Let’s go.", "completion": "To summarize my current high level hope/plan/whatever, we’re concerned about the case where SGD, or Stochastic Gradient Descent, finds some AI system that embodies useful knowledge about the world, or about how to think, or useful heuristics for thinking. And also uses it in order to achieve some end: it has beliefs, and then it selects the action that it expects will lead to a certain kind of consequence. At a really high level, instead of learning a package which potentially couples that knowledge about the world with some intention that we don’t like, we’d like to just throw out the intention and learn the interesting knowledge about the world. And then we can, if we desire, point that in the direction of actually helping humans get what they want.", "completion": "At a high level, the thing I’m spending my time on is going through examples of the kinds of things that I think gradient descent might learn, for which it’s very hard to do that decoupling. And then for each of them, saying, “Okay, what is our best hope?” or, “How could we modify gradient descent so that it could learn the decoupled version of this thing?” And the work will be organized around examples of cases where that seems challenging, and what the problems seem to be there. Right now, the particular instance that I’m thinking about most and have been for the last three to six months, is the case where you learn either facts about the world or a model of the world, which are defined, not in terms of human abstractions, but some different set of abstractions. As a very simple example that’s fairly unrealistic, you might imagine humans thinking about the world in terms of people and cats and dogs. 
And you might imagine a model which instead thinks about the world in terms of atoms bouncing around.", "completion": "So the concerning case is when we have this mismatch between the way your beliefs or your simulation or whatever of the world operates, and the way that human preferences are defined, such that it is then easy to take this model and use it to, say, plan for goals that are defined in terms of concepts that are natural to it, but much harder to use it to plan in terms of concepts that are natural to humans.", "completion": "So I can have my model of atoms bouncing around and I can say, “Great, search over actions and find the action that results in the fewest atoms in this room.” And it’s like, great. And then it can just enumerate a bunch of actions and find the one that results in the minimal atoms. And if I’m like, “Search for one where the humans are happy.” It’s like, “I’m sorry. I don’t know what you mean about humans or happiness.” And this is kind of a subtle case to talk about, because actually that system can totally carry on a conversation about humans or happiness. That is, at the end of the day, there are these observations, we can train our systems to make predictions of what are the actual bits that are going to be output by this camera."} {"prompt": "Yep.", "completion": "And so it can predict human faces walking around and humans saying words. It can predict humans talking about all the concepts they care about, and it can predict pictures of cats, and it can predict a human saying, “Yeah, that’s a cat.” And the concern is more that, basically you have your system which thinks natively in terms of atoms bouncing around or some other abstractions. And when you ask it to talk about cats or people, instead of getting it talking about actual cats or people, you get talking about when a human would say there is a cat or a person. And then if you optimize for “I would like a situation where all the humans are happy.” What you instead get is a situation where there are happy humans on camera. And so you end up back in the same kind of concern that you could have had, of your AI system optimizing to mess with your ability to perceive the world, rather than actually making the world good."} {"prompt": "So, when you say that you would like this kind of decoupling, the case you just described is one where it’s hard to do the decoupling. What’s a good example of, “Here we decoupled the motivation from the beliefs. And now I can insert my favorite motivation and press go.” What does that look like?", "completion": "So I think a central example for me, or an example I like, would be a system which has some beliefs about the world, represented in a language you’re familiar with. They don’t even have to be represented that way natively. Consider an AI system, which learns a bunch of facts about the world. It learns some procedure for deriving new facts from old facts, and learns how to convert whatever it observes into facts. It learns some, maybe opaque model that just converts what it observes into facts about the world. It then combines them with some of the facts that are baked into it by gradient descent. And then it turns the crank on these inference rules to derive a bunch of new facts. And then at the end, having derived a bunch of facts, it just tries to find an action such that it’s a fact that that action leads to the reward button being pushed.", "completion": "So there’s like a way you could imagine. 
And it’s a very unrealistic way for an AI to work, just as basically every example we can describe in a small number of words is a very unrealistic way for a deep neural network to work. Once I have that model, I could hope to, instead of having a system which turns the crank, derives a bunch of facts, then looks up a particular kind of fact, and finally uses it to take an action, have a system which starts from the statements, turns the crank, and then just answers questions, or basically directly translates the statements in its internal language into natural language. If I had that, then instead of searching over “the action leads to the reward button being pressed”, I can search over a bunch of actions, and for each of them, look at the beliefs it outputs, in order to assess how good the world is, and then search for one where the world is good according to humans.", "completion": "And so the key dynamic is, how do I expose all this “turning the crank on facts”? How do I expose the facts that it produces to humans in a form that’s usable for humans? And this brings us back to amplification or debate, these two techniques that I’ve worked on in the past, in this genre of AI helping humans evaluate AI behavior."} {"prompt": "Yep.", "completion": "Right. A way we could hope to train an AI to do that: we could use almost exactly the same process of SGD that produced the original reward-button-maximizing system. We’d hope to, instead of training it to maximize the reward button, train it to give answers that humans like, or answers that humans consider accurate and useful. And the way humans are going to supervise it is basically, following along stepwise with the deductions it’s performing as it turns this crank of deriving new facts from old facts.", "completion": "So it had some facts at the beginning. Maybe a human can directly supervise those. We can talk about the case where the human doesn’t know them, which I think is handled in a broadly similar way. And then, as it performs more and more steps of deduction, it’s able to output more and more facts. But if a human is able to see the facts that it had after n minus one steps, then it’s much easier for a human to evaluate some proposed fact at the nth step. So you could hope to have this kind of evaluation scheme where the human is incentivizing the system to report knowledge about the world, and then, however the system was able to originally derive the knowledge in order to take some action in the world, the system can also derive that knowledge in the service of making statements that a human regards as useful and accurate. So that’s a typical example."} {"prompt": "All right. And the idea is that, for whatever task we might have wanted an AI system to achieve, we just train a system like this, and then we’re like, “How do I do the right thing?” And then it just tells us, and ideally it doesn’t require really fast motors or appendages that humans don’t have, or we know how to build them or something. It just gives us some instructions, and then we do it. And that’s how we get whatever thing we wanted out of the AI.", "completion": "Yeah. We’d want to take some care to make everything really competitive. So we’d probably want to use this to get a reward function that we use to train our AI, rather than trying to use it to output instructions that a human executes. 
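[Editor's note: the "follow along stepwise" supervision described above can be illustrated with a small sketch. This is an editor-added toy, under the assumption that the system reports each derived fact together with the earlier facts it cites; human_checks_step is a hypothetical placeholder for a (possibly AI-assisted) human judgement.]

```python
# Reward the system only for facts whose derivation a human endorses step by
# step: checking the nth fact given facts 0..n-1 is much easier than judging
# the final conclusion from scratch.
from typing import List, Tuple

Fact = str
Derivation = List[Tuple[Fact, List[Fact]]]  # (new fact, earlier facts it was derived from)


def human_checks_step(new_fact: Fact, known_facts: List[Fact]) -> bool:
    """Placeholder: a human decides whether new_fact plausibly follows from known_facts."""
    return bool(new_fact)  # real version would query a person (plus assistants)


def stepwise_reward(initial_facts: List[Fact], derivation: Derivation) -> float:
    known = list(initial_facts)
    endorsed = 0
    for new_fact, cited in derivation:
        if all(c in known for c in cited) and human_checks_step(new_fact, known):
            known.append(new_fact)   # the endorsed fact becomes available for later steps
            endorsed += 1
        else:
            break                    # an unendorsed step: stop crediting further facts
    return endorsed / max(len(derivation), 1)
```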
And we want to be careful about… there’s a lot of details there in not ending up with something that’s a lot slower than the unaligned AI would have been."} {"prompt": "Okay.", "completion": "I think this is the kind of case where I’m sort of optimistic about being able to say like, “Look, we can decouple the rules of inference that it uses to derive new statements and the statements that it started out believing, we can decouple that stuff from the decision at the very end to take the particular statement it derived and use that as the basis for action.”"} {"prompt": "So going back a few steps. You were talking about cases where you could and couldn’t do the decoupling, and you’re worried about some cases where you couldn’t do the decoupling, and I was wondering how that connects to your research? You’re just thinking about those, or do you have ideas for algorithms to deal with them?", "completion": "Yeah, so I mentioned the central case we’re thinking about is this mismatch between the way that your AI most naturally is said to be thinking about what’s happening - the way the AI is thinking about what’s happening - and the way a human would think about what’s happening. That mismatch seems to me, right now, like a very central difficulty. I think maybe if I just describe it, it sounds like well, sometimes you get really lucky and your AI can be thinking about things; it’s just in a different language, and that’s the only difficulty. I currently think that’s a pretty central case, or handling that case is quite important. The algorithm we’re thinking about most, or the family of algorithms we’re thinking about most for handling that case is basically defining an objective over some correspondence, or some translation, between how your AI thinks about things and how the human thinks about things.", "completion": "The conventional way to define that, maybe, would be to have a bunch of human labeling. Like there was a cat, there was a dog, whatever. The concern with that is that instead of deciding if there was actually a cat, it’s translating, does a human think there’s a cat? So the main idea is to use objectives that are not just a function of what it outputs, they’re not the supervised objective of how well its outputs match human outputs. You have other properties. You can have regularization, like how fast is that correspondence? Or how simple is that correspondence? I think that’s still not good enough. You could have consistency checks, like saying, “Well, it said A and it said B, and we’re not able to label either A or B, but we understand that the combination of A and B is inconsistent.” This is still not good enough.", "completion": "And so then most of the time has gone into ideas that are, basically, taking those consistency conditions. So saying “We expect that when there’s a bark, it’s most likely there was a dog. We think that the model’s outputs should also have that property.” Then trying to look at what is the actual fact about the model that led to that consistency condition being satisfied. This gets us a little bit back into mechanistic transparency hopes, interpretability hopes. Where the objective actually depends on why that consistency condition was satisfied. So you’re not just saying, “Great, you said that there’s more likely to be a dog barking when there was a dog in the room.” We’re saying, “It is better if that relationship holds because of a single weight in your neural network.” That’s a very extreme case. 
That’s an extremely simple explanation for why that correlation occurred. And we could have a more general objective that cares about the nature of the explanation. That cares about why that correlation existed."} {"prompt": "Where the idea is that we want these consistency checks. We want them to be passed, not because we were just lucky with what situations we looked at, but actually, somehow the structure is that the model is reliably going to produce things that are right. And we can tell, because we can figure out what things the consistency checks passing are due to. Is that right?", "completion": "That’s the kind of thing. Yeah. And I think it ends up being, or it has been a long journey. Hopefully there’s a long journey that will go somewhere good. Right now that is up in the air. But some of the early candidates would be things like “This explanation could be very simple.” So instead of asking for the correspondence itself to be simple, ask for the reasons that these consistency checks are satisfied to be very simple. It’s more like one weight in a neural net rather than some really complicated correlation that came from the input. You could also ask for that correlation to depend on as few facts as possible about the input, or about the neural network."} {"prompt": "Okay.", "completion": "I think none of these quite work, and getting to where we’re actually at would be kind of a mess. But that’s the research program. It’s mostly sitting around, thinking about objectives of this form, having an inventory of cases that seem like really challenging cases for finding this correspondence. And trying to understand. Adding new objectives into the library and then trying to refine: here are all these candidates, here are all these hard cases. How do we turn this into something that actually works in all the hard cases? It’s very much sitting by a whiteboard. It is a big change from my old life. Until one year ago I basically just wrote code, or I spent years mostly writing code. And now I just stare at whiteboards."} {"prompt": "All right. So, changing gears a little bit, I think you’re perhaps most well known for a factored cognition approach to AI alignment, that somehow involves decomposing a particular task into a bunch of subtasks, and then training systems to basically do the decomposition. I was wondering if you could talk a little bit about how that fits into your view of which problems exist, and what your current thoughts are on this broad strategy?", "completion": "Yeah. So, the Factored Cognition Hypothesis was what Ought, a nonprofit I worked with, was calling this hope that arbitrarily complex tasks can be broken down into simpler pieces, and so on, ad infinitum, potentially at a very large slowdown. And this is relevant to a bunch of possible approaches to AI alignment. Because if you imagine that humans and AI systems are trying to train AIs to do a sequence of increasingly complex tasks, but you’re only comfortable doing this training when the human and their AI assistants are at least as smart as the AI they’re about to train, then if you just play training backwards, you basically have this decomposition of the most challenging task your AI was ever able to do, into simpler and simpler pieces. 
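[Editor's note: the Factored Cognition picture described above is roughly the following recursion. This is an illustrative sketch only; decompose, combine and human_answers_directly are hypothetical stand-ins for work that would really be done by humans and trained models.]

```python
# A hard question is split into subquestions, each handled by the same
# procedure with a smaller budget, bottoming out in questions small enough
# for a human to answer directly.
from typing import List


def human_answers_directly(question: str) -> str:
    return "human's best direct answer to: " + question        # stand-in


def decompose(question: str) -> List[str]:
    return ["first subquestion of: " + question,                # stand-in; in practice a
            "second subquestion of: " + question]               # human or model proposes these


def combine(question: str, sub_answers: List[str]) -> str:
    return f"answer to {question!r} built from {len(sub_answers)} sub-answers"  # stand-in


def factored_answer(question: str, budget: int) -> str:
    if budget <= 0:
        return human_answers_directly(question)   # base case: simple enough for a human
    sub_answers = [factored_answer(q, budget - 1) for q in decompose(question)]
    return combine(question, sub_answers)
```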
And so I’m mostly interested in tasks which cannot be done by any number of humans, tasks that, however long the humans are willing to spend during training, seem very hard to do by any of these approaches.", "completion": "So this is true for AI safety via debate, where the hope is you have several AIs arguing about what the right answer is. It’s true for iterated distillation and amplification, where you have a human with these assistants training a sequence of increasingly strong AIs. And it’s true for recursive reward modeling, which is, I guess, an agenda that came from a paper out of DeepMind, it’s by Jan Leike, who took over for me at OpenAI, where you’re trying to define a sequence of reward functions for more and more complex tasks, using assistants trained on the preceding reward functions.", "completion": "Anyway, it seems like all of these approaches run into this common… there’s something that I think of as an upper bound. I think other people might dispute this, but I would think of it as a crude upper bound, based on the fact that everything you ever train an AI to do in any of these ways can be broken down into smaller pieces, until it’s ultimately broken down into pieces that a human can do on their own.", "completion": "And sometimes that can be nonobvious. I think it’s worth pointing out that search can be trivially broken down into simpler pieces. Like if a human can recognize a good answer, then a large enough number of humans can do it, just because you can have a ton of humans doing a bunch of things until you find a good answer. I think my current take would be, I think it has always been the case that you can learn stuff about the world, which you could not have derived by breaking down the question. Like “What is the height of the Eiffel Tower?” doesn’t just break down into simpler and simpler questions. The only way you’re going to learn that is by going out and looking at the height of the Eiffel Tower, or maybe doing some crazy simulation of Earth from the dawn of time. ML in particular is going to learn a bunch of those things, or gradient descent is going to bake a bunch of facts like that into your neural network.", "completion": "So if doing what the ML does is decomposable, it would have to be through humans somehow looking at all of the training data which the ML system ever saw while it was trained, and drawing their own conclusions from that. I think that is, in some sense, very realistic. A lot of humans can really do a lot of things. But for all of these approaches I listed, when you’re doing these task decompositions, it’s not only the case that you decompose the final task the AI does into simpler pieces. You decompose it into simpler pieces, all of which the AI is also able to perform. And so learning, I think, doesn’t have that feature. That is, I think you can decompose learning in some sense into smaller pieces, but they’re not pieces that the final learned AI was able to perform.", "completion": "The learned AI is an AI which knows facts about the Eiffel Tower. It doesn’t know facts about how to go look at Wikipedia articles and learn something about the Eiffel Tower, necessarily. 
So for these approaches that rely on factored cognition, I now most often think of having both the humans decomposing tasks into smaller pieces, and also a separate search that runs in parallel with gradient descent.", "completion": "I wrote a post on imitative generalization, and then Beth Barnes wrote an explainer on it, a while ago. The idea here is, imagine, instead of decomposing tasks into tiny sub-pieces that a human can do, we’re going to learn a big reference manual to hand to a human, or something like that. And we’re going to use gradient descent to find the reference manual, such that for any given reference manual, you can imagine handing it to humans and saying, “Hey, human, trust the outputs from this manual. Just believe it was written by someone benevolent wanting you to succeed at the task. Now, using that, do whatever you want in the world.”", "completion": "And now there’s a bigger set of tasks the human can do, after you’ve handed them this reference manual. Like it might say like the height of the Eiffel Tower is whatever. And the idea in imitative generalization is just, instead of searching over a neural network - this is very related to the spirit of the decoupling I was talking about before - we’re going to search over a reference manual that we want to give to a human. And then instead of decomposing our final task into pieces that the human can do unaided, we’re going to decompose our final task into pieces that a human can do using this reference manual.", "completion": "So you might imagine then that stochastic gradient descent bakes a bunch of facts about the world into this reference manual. These are things the neural network sort of just knows. And then we give those to a human and we say, “Go do what you will, taking all of these facts as given.” And now the human can do some bigger set of tasks, or answer a bunch of questions they otherwise wouldn’t have been able to answer. And then we can get an objective for this reference manual. So if we’re producing the reference manual by stochastic gradient descent, we need some objective to actually optimize.", "completion": "And the proposal for the objective is, give that reference manual to some humans, ask them to do the task, or ask the large team of humans to eventually break down the task of predicting the next word of a webpage or whatever it is that your neural network was going to be trained to do. Look at how well the humans do at that predict-the-next-word task. And then instead of optimizing your neural network by stochastic gradient descent in order to make good predictions, optimize whatever reference manual you’re giving the human, by gradient descent, in order to make the humans make good predictions.", "completion": "I guess that doesn’t change the factored cognition hypothesis as stated, because the search is also just something which can be very easily split across humans. You’re just saying, “loop over all of the reference manuals, and for each one, run the entire process”. But I think in flavor it’s pretty different in that you don’t have your trained AI doing any one of those subtasks. Some of those subtasks are now being parallelized across the steps of gradient descent or whatever, or across the different models being considered in gradient descent. 
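[Editor's note: here is a toy rendering of the imitative generalization objective just described, added for illustration. The search over manuals, the human predictor, and the data format are all hypothetical stand-ins; in the real proposal the "manual" would be optimized by gradient descent and the human would be doing genuine reasoning.]

```python
# Instead of scoring a neural network on its own predictions, score a candidate
# "reference manual" z by how well a human who trusts z predicts the data.
from typing import List, Tuple

Manual = List[str]          # e.g. a list of declarative facts
Datum = Tuple[str, str]     # (context, target) prediction task


def human_predict(context: str, manual: Manual) -> str:
    """Stand-in for a human predicting while taking the manual's facts as given."""
    for fact in manual:
        if fact and context in fact:
            return fact.split()[-1]   # crude lookup standing in for human reasoning
    return "unknown"


def score_manual(manual: Manual, data: List[Datum]) -> float:
    """Fraction of examples the manual-assisted human gets right (data assumed non-empty)."""
    return sum(human_predict(ctx, manual) == target for ctx, target in data) / len(data)


def search_for_manual(candidates: List[Manual], data: List[Datum]) -> Manual:
    # Stand-in for the much harder search SGD would do over manuals.
    return max(candidates, key=lambda m: score_manual(m, data))
```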
And that is most often the kind of thing I’m thinking about now.", "completion": "And that suggests this other question: okay, if your reference manual’s just text, how big is that manual going to be compared to the size of your neural network? And can you search over it as easily as you can search over your neural network? I think the answer in general is, you’re completely screwed if that manual is in text. So we mentioned earlier that it’s not obvious that humans can’t just do all the tasks we want to apply AI to. You could imagine a world where we’re just applying AI to tasks where humans are able to evaluate the outputs. And in some sense, everything we’re talking about is just extending that range of tasks to which we can apply AI systems. And so breaking tasks down into subtasks that AI can perform is one way of extending the range of tasks.", "completion": "Now we are basically looking, not at tasks that a single human can perform, but at tasks that some large team of humans can perform. And then adding this reference manual does further extend the set of tasks that a human can perform. I think if you’re clever, it extends it to the set of tasks where what the neural net learned can be cashed out as this kind of declarative knowledge that’s in your reference manual. But maybe not that surprisingly, that does not extend it all the way. Text is limited compared to the kinds of knowledge you can represent in a neural network. That’s the kind of thing I’m thinking about now."} {"prompt": "Okay. And what’s a limitation of text versus what you could potentially represent?", "completion": "So if you imagine you have your billion-parameter neural network, I mean, a simple example is just, if you imagine that neural network doing some simulation, representing the simulation it wants to run, it’s like, “Oh yeah, if there’s an atom here, there should be an atom there in the next time step.” That simulation is described by these billion numbers, and searching over a reference manual big enough to contain a billion numbers is a lot harder than searching over a neural network, like a billion weights of a neural network. And more brutally, a human who has that simulation, in some sense doesn’t really know enough to actually do stuff with it. They can tell you where the atoms are, but they can’t tell you where the humans are. That’s one example.", "completion": "Another is: suppose there’s some complicated set of correlations, or you might think that things that are more like skills will tend to have this feature more. Like, if I’m an image classification model, I know that that particular kind of curve is really often associated with something being part of a book. I can describe that in words, but it gets blown up a lot in the translation process towards words, and it becomes harder to search over."} {"prompt": "So the things we’ve talked about have mostly been your thoughts about objectives to give AI systems. And so more in this outer alignment style setting. I’m wondering for inner alignment style problems, where the AI system has some objective and you want to make sure that it’s really devoted to pursuing that objective, even if the situation changes, or even in the worst case, I’m wondering if you have thoughts on solutions you’re particularly keen on in those settings.", "completion": "Yeah. So I think I have two categories of response. 
One is technical research we can do that helps with this kind of inner alignment/catastrophic failure/out of distribution, that cluster of problems across the board, or in many possible worlds. And another is, assuming my research project was successful, how would this be handled on that? I’ll start with what people are doing that seems helpful.", "completion": "So I think the most basic thing I’m excited about is just generating hard cases, and throwing hard cases at your AI. So if you imagine you have some AI system, you’re like “I’m concerned that on some very rare inputs, the AI system may decide that it’s time to kill all humans.” I’m like, “Well, what I’m going to do is, I’m going to try and generate the kinds of inputs that might convince my AI to kill all humans, and hopefully if I try a really long time and I can’t find any, then it’s less likely we’re going to encounter any out there in the real world.”"} {"prompt": "Yep. And the theory is that it’s in some sort of safe box, so that if you succeed at your search, the AI does not in fact kill all the humans.", "completion": "Yeah. So there’s this distinction where, when training an AI, you may often need to see the consequences of its actions, whereas for this kind of training, I wouldn’t want to be running the things my AI proposes and checking the consequences. Hopefully. So there’s more subtleties there, definitely. But yeah, we’re not actually doing the things the AI suggests. We just observe that it takes an action."} {"prompt": "So it “takes an action” but it doesn’t actually take the action.", "completion": "Yeah. It outputs an action, and then there’s no actuators to which the action can get sent. We’re imagining some AI which we might ultimately hook up to the internet. But right now we look at the proposed internet actions and we’re like, “Do those seem good?” And the hope is we’ve solved what I’m filing more under the outer alignment side of being able to assess, for a given proposed action, would that action be catastrophically bad? We can make predictions about that as well as the AI can. So in particular, if the AI has calculated that this action will lead to like a robot uprising killing all humans, then we can look at the action and be like, “Actually, this is not a good action,” and then ding our AI. So if you have that ability, then you can try in your little sandbox to throw tons of situations on your AI, and see if it ever proposes any actions that look like they would be really bad.", "completion": "I think that feels like the most basic ingredient. That does not seem sufficient. I guess the basic reason it doesn’t seem sufficient is that even if bad cases exist, it can be really hard to find them. So you kind of want to, from there, make the job… so, you imagine this adversary is trying to break your model, and trying to get it to do something bad. You want to make their job as easy as you can possibly make it. So an example would be like, instead of having them actually like provide inputs to your system, maybe you let them go in and monkey around with some of the neurons in the system and be like, “Okay, I can’t actually find an input that causes this neuron to fire, but if I could, then the AI would do this bad thing.”", "completion": "And that becomes quite subtle, because as I’m monkeying around with the neurons, eventually it becomes unclear. When it had an input, I could say, okay, my AI sees the following webpage, and then it responds by sending the following packet. 
And I can tell that that would be a really bad thing for it to do. Like it replied to the help wanted ad for someone running their robot rebellion.", "completion": "It’s harder if I’m monkeying with its neurons, and I see that it outputs a given packet, I don’t necessarily know what state of the world corresponded to those neurons being in that configuration. It’s like, for almost any action, there’s a state of the world in which it’s good to do that action. And in particular, bad actions may not look that bad. They’re just bad because of details of the context.", "completion": "And so if I got the AI to take some action by monkeying with its neurons, it’s a little bit unclear how I say “Was that action actually bad or not?” You want to give your adversary that kind of ability, in order to make it easier for them to show, well, here’s a case where your AI could do something bad. I can’t actually find the input, but you should be scared, because it seems like this could happen. And you also want to give them interpretability tools. I want them to be able to understand what the AI is thinking and to say “Oh, it looks like the AI is checking if any humans are watching, and doing something nice if they are. So let’s just really optimize on that to find a case where it thinks there’s no humans watching, and then see if it’s still nice.”", "completion": "That’s sort of a cluster of techniques, all of which are designed to like help you really hammer at, are there cases where this AI might do something bad? Can we find them? If we can find them, then we can train on them. Before we actually take the action, we can provide a gradient saying “Don’t do something that’s really bad in that hypothetical case.” And those kind of blur pretty continuously into verification stuff in my mind. That is, verification is, in some sense, some kind of limit of being willing to monkey with the neurons and then having some formal specification for how much the adversary is allowed to monkey with the neurons. I think all of those are research directions that people pursue for a variety of motivations out there in the world. And I’m pretty excited about a lot of that work."} {"prompt": "And on your favorite approaches, how does this pan out?", "completion": "So I mentioned before this hoped-for decoupling, where I’d say we’re concerned about the case where gradient descent finds a neural network, which is trying to figure out how to mess with the humans. And then when an opportunity comes along, it’s going to mess with the humans. And in some sense, the nicest thing to do is to say, “Okay, the reason we wanted that AI was just because it encodes some knowledge about how to do useful stuff in the world.” And so what we’d like to do is to say, “Okay, we are going to set things up so that it’s easier for gradient descent to learn just the knowledge about how to behave well in the world, rather than to learn that knowledge embedded within an agent that’s trying to screw over humans.” And that is hard, or it seems quite hard. But I guess the biggest challenge in my mind in this decoupling of outer and inner alignment is that this seems almost necessary either for a full solution to outer alignment or a full solution to inner alignment.", "completion": "So I expect to be more in the trying to kill two birds with one stone regime. And these are the kinds of examples of decoupling we described before. 
You hope that you only have to use gradient descent to find this reference manual, and then from there you can much more easily pin down what all the other behaviors should be. And then you hope that reference manual is smaller than the scheming AI, which has all of the knowledge in that reference manual baked into its brain. It’s very unclear if that can be done. I think it’s also fairly likely that in the end, maybe we just don’t know how that looks, and it’s fairly likely in the end that it has to be coupled with some more normal measures like verification or adversarial training."} {"prompt": "All right. So I’d like to now talk a little bit about your research style. So you mentioned that as of recently, the way you do research is you sit in a room and you think about some stuff. Is there any chance you can give us more detail on that?", "completion": "So I think the basic organizing framework is something like, we have some current set of algorithms and techniques that we use for alignment. Step one is try and dream up some situation in which your AI would try and kill everyone, despite your best efforts using all the existing techniques. So like a situation describing, “We’re worried that here’s the kind of thing gradient descent might most easily learn. And here’s the way the world is, such that the thing gradient descent learned tries to kill everyone. And here’s why you couldn’t have gotten away with learning something else instead.” We tell some story that culminates in doom, which is hard to avoid using existing techniques. That’s step one.", "completion": "Step two is… maybe there’s some step 1.5, which is trying to strip that story down to the simplest moving parts that feel like the simplest sufficient conditions for doom. Then step two is trying to design some algorithm, just thinking about only that case. I mean, in that case, what do we want to happen? What would we like gradient descent to learn instead? Or how would we like to use the learned model instead, or whatever. What is our algorithm that addresses that case? The last three months have just been working on a very particular case where I currently think existing techniques would lead to doom, along the kinds of lines we’ve been talking about, like grabbing the camera or whatever, and trying to come up with some algorithm that works well in that case.", "completion": "And then, if you succeed, then you get to move on to step three, where you look again over all of your cases, you look over all your algorithms, you probably try and say something about, can we unify? We know what we want to happen in all of these particular cases. Can we design one algorithm that does that right thing in all the cases? For me that step is mostly a formality at this stage, or it’s not very important at this stage. Mostly we just go back to step one. Once you have your new algorithm, then you go back to, okay, what’s the new case that we don’t handle?", "completion": "Normally, I’m just pretty lax about the plausibility of the doom stories that I’m thinking about at this stage. That is, I have some optimism that in the end we’ll have an algorithm that results in your AI just never deliberately trying to kill you, and it actually, hopefully, will end up being very hard to tell a story about how your AI ends up trying to kill you. 
And so while I have this hope, I’m kind of just willing to say, “Oh, here’s a wild case.” A very unrealistic thing that gradient descent might learn, but that’s still enough of a challenge that I want to change or design an algorithm that addresses that case. Because I hope working with really simple cases like that helps guide us towards, if there’s any nice, simple algorithm that never tries to kill you, thinking about the simplest cases you can is just a nice, easy way to make progress towards that. Yeah. So I guess most of the action then is in, what do we actually do in steps one and two? At a high level, that’s what I’m doing all the time."} {"prompt": "And is there anything like you can broadly say about what happens in steps one or two? Or do you think that depends a lot on the day or the most recent problem?", "completion": "Yeah, I guess in step one, the main question people have is, what is the story like, or what is the type signature of that object, or what is it written out in words? And I think most often I’m writing down some simple pseudo code and I’m like, “Here is the code you could imagine your neural network executing.” And then I’m telling some simple story about the world where I’m like, “Oh, actually you live in a world which is governed by the following laws of physics, and the following actors or whatever.” And in that world, this program is actually pretty good. And then I’m like, “Here is some assumption about how SGD works that’s consistent with everything we know right now.” Very often, we think SGD could find any program that’s the simplest program that achieves a given loss, or something.", "completion": "So the story has the sketch of some code, and often that code will have some question marks and like looks like you could fill those in to make the story work. Some description of the environment, some description of facts about gradient descent. And then we’re bouncing back and forth between that, and working on the algorithm. Working on the algorithm, I guess, is more like… at the end of the day, most of the algorithms take the form of: “Here’s an objective. Try minimizing this with gradient descent.” So basically the algorithm is, here’s an objective. And then you look at your story and you’re like, “Okay, on this story, is it plausible that minimizing this objective leads to this thing?” Or often part of the algorithm is “And here’s the good thing we hope that you would learn instead of that bad thing.”", "completion": "In your original story you have your AI that loops over actions until it finds one that it predicts leads to smiling human faces on camera. And that’s bad because in this world we’ve created, the easiest way to get smiling human faces on camera involves killing everyone and putting smiles in front of the camera. And then we’re like, “Well, what we want to happen instead is like this other algorithm I mentioned where, it outputs everything it knows about the world. And we hope that includes the fact that the humans are dead.” So then a proposal will involve some way of operationalizing what that means, like what it means for it to output what it knows about the world for this particular bad algorithm that’s doing a simulation or whatever, that we imagined. 
And then what objective you would optimize with gradient descent that would give you this good program that you wanted, instead of the bad one you didn’t want."} {"prompt": "The next question I’d like to ask is, what do you see as the most important big picture disagreements you have with people who already believe that advanced AI technology might pose some kind of existential risk, and we should really worry about that and try to work to prevent that?", "completion": "Broadly, I think there are two categories of disagreements, or I’m flanked on two different sides. One is by the more Machine Intelligence Research Institute crowd, which has a very pessimistic view about the feasibility of alignment and what it’s going to take to build AI systems that aren’t trying to kill you. And then on the other hand, by researchers who tend to be at ML labs, who tend to be more in the camp of like, it would be really surprising if AI trained with this technique actually was trying to kill you. And there’s nuances to both of those disagreements.", "completion": "Maybe you could split the second one into one category that’s more like, actually this problem isn’t that hard, and we need to be good at the basics in order to survive. Like the gravest risk is that we mess up the basics. And a second camp being like, actually we have no idea what’s going to be hard about this problem. And what it’s mostly about is getting set up to collect really good data as soon as possible, so that we can adapt to what’s actually happening.", "completion": "It’s also worth saying that it’s unclear often which of these are empirical disagreements versus methodological differences, where I have my thing I’m doing, and I think that there’s room for lots of people doing different things. So there are some empirical disagreements, but not all the differences in what we do are explained by those differences, versus some of them being like, Paul is a theorist, who’s going to do some theory, and he’s going to have some methodology such that he works on theory. I am excited about theory, but it’s not always the case that when I’m doing something theoretical it’s because I think the theoretical thing is dominant.", "completion": "And going in those disagreements with the MIRI folk, that’s maybe more weeds-y. It doesn’t have a super short description. We can return to it in a bit if we want. On the people who are on the more optimistic side: I think for people who think existing techniques are more likely to be okay, I think the most common disagreement is about how crazy the tasks our AIs will be doing are, or how alien will the reasoning of AI systems be. People who are more optimistic tend to be like, “AI systems will be operating at high speed and doing things that are maybe hard for humans or a little bit beyond the range of human abilities, but broadly, humans will be able to understand the consequences of the actions they propose fairly well.” They’ll be able to fairly safely look at an action, and be like, can we run this action? They’ll be able to mostly leverage those AI systems effectively, even if the AI systems are just trying to do things that look good to humans.", "completion": "So often it’s a disagreement about, I’m imagining AI systems that reason in super alien ways, and someone else is like, probably it will mostly be thinking through consequences, or thinking in ways that are legible to humans. And thinking fast in ways that are legible to humans gets you a lot of stuff. 
I am very long on the idea that thinking fast in ways legible to humans is very powerful. I definitely believe that a lot more than most people, but, especially because now I’m working on the more theoretical end, I’m often thinking about all the cases where that doesn’t work, and some people are more optimistic that the cases where that works are enough, which is either an empirical claim about how AI will be, or sometimes a social claim about how important it is to be competitive.", "completion": "I really want to be able to build aligned AI systems that are economically competitive with unaligned AI, and I’m really scared of a world where there’s a significant tension there. Whereas other people are more like, “It’s okay. It’s okay if aligned AI systems are a little bit slower or a little bit dumber, people are not going to want to destroy the world, and so they’ll be willing to hold off a little bit on deploying some of these things.”", "completion": "And then on the empirical side, there are people who think that theoretical work is less valuable, and we should be mostly focused on the empirics or just doing other stuff. I would guess one common disagreement is just that I’m reasonably optimistic about being able to find something compelling on paper. So I think this methodology I described of “Try and find an algorithm for which it’s hard to tell a story about how your AI ends up killing everyone”, I actually expect that methodology to terminate with being like, “Yep, here’s an algorithm. It looks pretty good to us. We can’t tell a story about how it’s uncompetitive or lethal.” Whereas I think other people are like, “That is extremely unlikely to be where that goes. That’s just going to be years of you going around in circles until eventually you give up.” That’s actually a common disagreement on both sides. That’s probably also the core disagreement with MIRI folks, in some sense."} {"prompt": "Yeah. So you said it was perhaps hard to concisely summarize your differences with the sort of group of people centered, perhaps, at the Machine Intelligence Research Institute (or MIRI for short). Could you try?", "completion": "So definitely the upshot is, I am optimistic about being able to find an algorithm which can align deep learning, like, a system which is closely analogous to and competitive with standard deep learning. Whereas they are very pessimistic about the prospects for aligning anything that looks like contemporary deep learning. That’s the upshot. So they’re more in the mindset of like, let’s find any task we can do with anything kind of like deep learning, and then be willing to take great pains and huge expense to do just that one task, and then hopefully find a way to make the world okay after that, or maybe later build systems that are very unlike modern deep learning. Whereas I’m pretty optimistic - where “pretty optimistic” means I think there’s a 50-50 chance or something - that we could have a nice algorithm that actually lets you basically do something like deep learning without it killing everyone.", "completion": "That’s the upshot. And then the reasons for that are, I think, pretty weedsy, but I guess intuitively it’s something like: if you view the central objective as about decoupling and trying to learn what your unaligned agent would have known, I think that there are a bunch of possible reasons that that decoupling could be really hard. Fundamentally, the cognitive abilities and the intentions could come as a package. 
This is also really core in MIRI’s disagreement with more conventional ML researchers, who are like, why would you build an agent? Why not just build a thing that helps you understand the world?", "completion": "I think on the MIRI view, there’s likely to be this really deep coupling between those things. I’m mostly working on other ways that decoupling can be hard, besides this kind of core one MIRI has in mind. I think MIRI is really into the idea that there’s some kind of core of being a fast, smart agent in the world. And that that core is really tied up with what you’re using it for. It’s not coherent to really talk about being smart without developing that intelligence in the service of a goal, or to talk about like factoring out the thing which you use.", "completion": "There’s some complicated philosophical beliefs about the nature of intelligence, which I think especially Eliezer is fairly confident in. He thinks it’s mostly pretty settled. So I’d say that’s probably the core disagreement. I think there’s a secondary disagreement about how realistic it is to implement complex projects. I think their take is, suppose Paul comes up with a good algorithm. Even in that long shot, there’s no way that’s going to get implemented, rather than just something easier that destroys the world. Projects fail the first time, and this is a case where we have to get things right the first time - well, that’s a point of contention - such that you’re not going to have much of a chance. That’s the secondary disagreement."} {"prompt": "And sort of related to that, I’m wondering, what do you think your most important uncertainties are? Uncertainties such that if you resolved them, that would in a big way change what you were motivated to do, in order to reduce existential risk from AI.", "completion": "Yeah. So maybe top four. One would be, is there some nice algorithm on paper that definitely doesn’t result in your AI killing you, and is definitely competitive? Or is this a kind of thing where like that’s a pipe dream and you just need to have an algorithm that works in the real world? Yeah. That would have an obvious impact on what I’m doing. I am reasonably optimistic about learning a lot about that over the coming years. I’ve been thinking recently that maybe by the end of 2022, if this isn’t going anywhere, I’ll pretty much know and can wind down the theory stuff, and hopefully significantly before then we’ll have big wins that make me feel more optimistic. So that’s one uncertainty. Just like, is this thing I’m doing going to work?", "completion": "A second big uncertainty is, is it the case that existing best practices in alignment would suffice to align powerful AI systems, or would buy us enough time for AI to take over the alignment problem from us? Like, I think eventually the AI will be doing alignment rather than us, and it’s just a question of how late in the game does that happen and how far existing alignment techniques carry us. I think it’s fairly plausible that existing best practices, if implemented well by a sufficiently competent team that cared enough about alignment, would be sufficient to get a good outcome. And I think in that case, it becomes much more likely that instead of working on algorithms, I should be working on actually bringing practice up to the limits of what is known. 
Maybe I’ll just do three, not four.", "completion": "And then three, maybe this is a little bit more silly, but I feel legitimate moral uncertainty over what kinds of AI… maybe the broader thing is just how important is alignment relative to other risks? I think one big consideration for the value of alignment is just, how good is it if the AI systems take over the world from the humans? Where my default inclination is, that doesn’t sound that good. But it sounds a lot better than nothing in expectation, like a barren universe. It would matter a lot. If you convinced me that number was higher, at some point I would start working on other risks associated with the transition to AI. That seems like the least likely of these uncertainties to actually get resolved.", "completion": "I find it kind of unlikely I’m going to move that much from where I am now, which is like… maybe it’s half as good for AIs to take over the world from humans, than for humans to choose what happens in space. And that’s close enough to zero that I definitely want to work on alignment, and also close enough to one that I also definitely don’t want to go extinct."} {"prompt": "So my penultimate question is, or it might be antepenultimate depending on your answer, is, is there anything that I have not yet asked, but you think that I should have?", "completion": "It seems possible that I should have, as I’ve gone, been plugging all kinds of alignment research that’s happening at all sorts of great organizations around the world. I haven’t really done any of that. I’m really bad at that though. So I’m just going to forget someone and then feel tremendous guilt in my heart."} {"prompt": "Yeah. How about in order to keep this short and to limit your guilt, what are the top five people or organizations that you’d like to plug?", "completion": "Oh man, that’s just going to increase my guilt. Because now I have to choose five."} {"prompt": "Perhaps name five. Any five!", "completion": "Any five. I think there’s a lot of ML labs that are doing good work, ML labs who view their goal as getting to powerful transformative AI systems, or doing work on alignment. So that’s like DeepMind, OpenAI, Anthropic. I think all of them are gradually converging to this gradual crystallization in what we all want to do. That’s one. Maybe I’ll do three things. Second can be academics. There’s a bunch of people. I’m friends with Jacob Steinhardt at Berkeley. His students are working on robustness issues with an eye towards long term risks. A ton of researchers at your research organization, which I guess you’ve probably talked about on other episodes."} {"prompt": "I talked to some of them. I don’t think we’ve talked about it as a whole. Yeah. It’s the Center for Human-Compatible AI. If people are interested, they can go to humancompatible.ai to see a list of people associated with us. And then you can, for each person, I guess you can look at all the work they did. We might have a newsletter or something [as far as I can tell, we do not]. I did not prepare for this.", "completion": "Sorry for putting you on the spot with pitching. No, I think I’m not going to do justice to the academics. There’s a bunch of academics, often just like random individuals here and there with groups doing a lot of interesting work. And then there’s kind of the weird effective altruist nonprofits, and conventional AI alignment crowd nonprofits. Probably the most salient to me there are Redwood Research. 
It’s very salient to me right now because I’ve been talking with them a bunch over the last few weeks."} {"prompt": "What are they?", "completion": "They’re working on robustness, broadly. So this adversarial training stuff. How do you make your models definitely not do bad stuff on any input? Ought, which is a nonprofit that has been working on like, how do you actually turn large language models into tools that are useful for humans, and the Machine Intelligence Research Institute, which is the most paranoid of all organizations about AI alignment - their core value added probably. There’s a lot of people doing a lot of good work. I didn’t plug them at all throughout the podcast, but I love them anyway."} {"prompt": "All right. So speaking of plugging things, if people listen to this podcast and they’re now interested in following you and your work, what should they do?", "completion": "I write blog posts sometimes at ai-alignment.com. I sometimes publish to the alignment forum. And depending on how much you read, it may be your best bet to wait until spectacular, exciting results emerge, which will probably appear one of those places, and also in print. But we’ve been pretty quiet over the last six months, definitely. I expect to be pretty quiet for a while, and then to have a big write up of what we’re basically doing and what our plan is sometime. I guess I don’t know when this podcast is appearing, but sometime in early 2022 or something like that."} {"prompt": "I also don’t know when it’s appearing. We did date ourselves to infrastructure week, one of the highly specific times. Okay. Well, thanks for being on the show.", "completion": "Thanks for having me."} {"prompt": "Okay, today I have the pleasure of interviewing Paul Christiano, who is the leading AI safety researcher. He's the person that labs and governments turn to when they want feedback and advice on their safety plans. He previously led the Language Model Alignment team at OpenAI, where he led the invention of Rlhf. And now he is the head of the Alignment Research Center. And they've been working with the big labs to identify when these models will be too unsafe to keep scaling. Paul, welcome to the podcast.", "completion": "Thanks for having me. Looking forward to talking."} {"prompt": "Okay, so first question, and this is a question I've asked Holden, Ilya, Dario, and none of them are going to be a satisfying answer. Give me a concrete sense of what a post AGI world that would be good would look like. How are humans interfacing with the AI? What is the economic and political structure?", "completion": "Yeah, I guess this is a tough question for a bunch of reasons. Maybe the biggest one is concrete. And I think it's just if we're talking about really long spans of time, then a lot will change. And it's really hard for someone to talk concretely about what that will look like without saying really silly things. But I can venture some guesses or fill in some parts. I think this is also a question of how good is good? Like, often I'm thinking about worlds that seem like kind of the best achievable outcome or a likely achievable outcome. So I am very often imagining my typical future has sort of continuing economic and military competition amongst groups of humans. I think that competition is increasingly mediated by AI systems. So, for example, if you imagine humans making money, it'll be less and less worthwhile for humans to spend any of their time trying to make money or any of their time trying to fight wars. 
So increasingly, the world you imagine is one where AI systems are doing those activities on behalf of humans. So, like, I just invest in some index fund, and a bunch of AIS are running companies, and those companies are competing with each other. But that is kind of a sphere where humans are not really engaging much. The reason I gave this how good is good caveat is, like, it's not clear if this is the world you'd most love. I'm like, yeah, I'm leading with like, the world still has a lot of war and of economic competition and so on. But maybe what I'm trying to what I'm most often thinking about is, like, how can a world be reasonably good during a long period where those things still exist? In the very long run, I kind of expect something more like strong world government rather than just this status quo. That's like, a very long run. I think there's, like, a long time left of having a bunch of states and a bunch of different economic powers, one world government."} {"prompt": "Why do you think that's the transition that's likely to happen at some point.", "completion": "So again, at some point I'm imagining, or I'm thinking of the very broad sweep of history. I think there are a lot of losses. Like war is a very costly thing. We would all like to have fewer wars. If you just ask what is humanity's long term future like? I do expect to drive down the rate of war to very, very low levels eventually. It's sort of like this kind of technological or sociotechnological problem of sort of how do you organize society, navigate conflicts in a way that doesn't have those kinds of losses. And in the long run, I do expect this to succeed. I expect it to take kind of a long time. Subjectively, I think an important fact about AI is just like doing a lot of cognitive work and more quickly, getting you to that world more quickly, or figuring out how do we set things up that way?"} {"prompt": "Yeah, the way Carl Schulman put it on the podcast is that you would have basically a thousand years of intellectual progress or social progress in a span of a month or whatever when the intelligence explosion happens more broadly. So the situation know we have these AIS who are managing our hedge funds and managing our factories and so on. That seems like something that makes sense when the AI is human level. But when we have superhuman AIS, do we want gods who are enslaved forever in 100 years? What is the decision we want?", "completion": "100 years is a very, very long time. Maybe starting with the spirit of the question. Or maybe I have a view which is perhaps less extreme than Carl's view, but still like a hundred objective years is further ahead than I ever think. I still think I'm describing a world which involves incredibly smart systems running around, doing things like running companies on behalf of humans and fighting wars on behalf of humans. And you might be like, is that the world you really want? Or certainly not the first best world, as we mentioned a little bit before, I think it is a world that probably is of the achievable worlds or like feasible worlds is the one that seems most desirable to me that is sort of decoupling the social transition from this technological transition. So you could say, like, we're about to build some AI systems, and at the time we build AI systems, you would like to have either greatly changed the way world government works, or you would like to have sort of humans have decided like, we're done, we're passing off the baton to these AI systems. 
I think that you would like to decouple those timescales. So I think AI development is by default, barring some kind of coordination going to be very fast. So there's not going to be a lot of time for humans to think like, hey, what do we want? If we're building the next generation instead of just raising it the normal way. Like, what do we want that to look like? I think that's like a crazy hard kind of collective decision that humans naturally want to cope with over a bunch of generations patients. And the construction of AI is this very fast technological process happening over years. So I don't think you want to say like, by the time we have finished this technological progress, we will have made a decision about the next species we're going to build and replace ourselves with. I think the world we want to be in is one where we say either we are able to build the technology in a way that doesn't force us to have made those decisions, which probably means it's a kind of AI. System that we're happy, like Delegating fighting a war, running a company to, or if we're not able to do that, then I really think you should not be doing you shouldn't have been building that technology. If you're like, the only way you can cope with AI is being ready to hand off the world to some AI system you built. I think it's very unlikely we're going to be sort of ready to do that. On the timelines that the technology would naturally dictate, say we're in the situation."} {"prompt": "In which we're happy with the thing. What would it look like for us to say we're ready to hand off the baton? What would make you satisfied? And the reason it's relevant to ask you is because you're on Anthropics Long Term Benefit trust and you'll choose the majority of the board members. In the long run at Anthropic, these will presumably be the people who decide if Anthropic gets AI first, what the AI ends up doing. So what is the version of that that you would be happy with?", "completion": "My main high level take here is that I would be unhappy about a world where Anthropic just makes some call and Anthropic is like, here's the kind of AI. We've seen enough, we're ready to hand off the future to this kind of AI. So procedurally, I think it's not a decision that kind of I want to be making personally or I want Anthropic to be making. So I kind of think from the perspective of that decision making are those challenges? The answer is pretty much always going to be like, we are not collectively ready because we're sort of not even all collectively engaged in this process. And I think from the perspective of an AI company, you kind of don't have this fast handoff option. You kind of have to be doing the option value to build the technology in a way that doesn't lock humanity into one course path. This isn't answering your full question, but this is answering the part that I think is most relevant to governance questions for Anthropic."} {"prompt": "You don't have to speak on behalf of Anthropic. I'm not asking about the process by which we would, as a civilization, agree to hand off. I'm just saying, okay, I personally, it's hard for me to imagine in 100 years that these things are still our slaves. And if they are, I think that's not the best world. So at some point, we're handing off the baton. 
Where would you be satisfied with this is an arrangement between the humans and AIS where I'm happy to let the rest of the universe or the rest of time play out.", "completion": "I think that it is unlikely that in 100 years I would be happy with anything that was like, you had some humans, you're just going to throw away the humans and start afresh with these machines you built. That is I think you probably need subjectively longer than that before I or most people are like, okay, we understand what's up for grabs here. If you talk about 100 years, I kind of do. There's a process that I kind of understand and like a process of like, you have some humans. The humans are, like, talking and thinking and deliberating together. The humans are having kids and raising kids, and one generation comes after the next. There's that process we kind of understand, and we have a lot of views about what makes it go well or poorly, and we can try and improve that process and have the next generation do it better than the previous generation. I think there's some story like that that I get and that I like. And then I think that the default path to be comfortable with something very different is kind of more like just run that story for a long time, have more time for humans to sit around and think a lot and conclude, here's what we actually want. Or a long time for us to talk to each other or to grow up with this new technology and live in that world for our whole lives and so on. And so I'm mostly thinking from the perspective of these more local changes of saying not like, what is the world that I want? What's the crazy world? The kind of crazy I'd be happy handing off to more, just like, in what way do I wish we right now were different? How could we all be a little bit better? And then if we were a little bit better, then they would ask, okay, how could we all be a little bit better? And I think that it's hard to make the giant jump rather than to say, what's the local change that would cause me to think our decision are better."} {"prompt": "Okay, so then let's talk about the transition period in which we were doing all this thinking. What should that period look like? Because you can't have the scenario where everybody has access to the most advanced capabilities and can kill off all the humans with a new bioweapon at the same time. I guess you wouldn't want too much concentration. You wouldn't want just one agent having AI this entire time. So what is the arrangement of this period of reflection that you'd be happy with?", "completion": "Yeah, I guess there's two aspects of that that seem particularly challenging, or there's a bunch of aspects that are challenging. All of these are things that I personally like. I just think about my one little slice of this problem in my day job. So here I am speculating. Yeah, but so one question is what kind of access to AI is both compatible with the kinds of improvements you'd like? So do you want a lot of people to be able to use AI to better understand what's true or relieve material suffering, things like this, and also compatible with not all killing each other immediately? I think sort of the default or the simplest option there is to say there are certain kinds of technology or certain kinds of action where destruction is easier than defense. So, for example, in the world of today, it seems like maybe this is true with physical explosives, maybe this is true with biological weapons, maybe this true with just getting a gun and shooting people. 
There's a lot of ways in which it's just kind of easy to cause a lot of harm and there's not very good protective measures. So I think the easiest path would say we're going to think about those. We're going to think about particular ways in which destruction is easy and try and either control access to the kinds of physical resources that are needed to cause that harm. So, for example, you can imagine the world where an individual actually just can't, even though they're rich enough to can't control their own factory, that can make tanks. You say like, look, a matter of policy sort of access to industry is somewhat restricted or somewhat regulated, even though, again, right now it can be mostly regulated just because most people aren't rich enough that they could even go off and just build 1000 tanks. You live in the future where people actually are so rich, you need to say that's just not a thing you're allowed to do, which to a significant extent is already true. And you can expand the range of domains where that's true. And then you could also hope to intervene on actual provision of information. Or if people are using their AI, you might say, look, we care about what kinds of interactions with AI, what kind of information people are getting from AI. So even if for the most part, people are pretty free to use AI to delegate tasks to AI agents, to consult AI advisors, we still have some legal limitations on how people use AI. So again, don't ask your AI how to cause terrible damage. I think some of these are kind of easy. So in the case of don't ask your AI how you could murder a million people, it's not such a hard legal requirement. I think some things are a lot more subtle and messy, like a lot of domains. If you were talking about influencing people or running misinformation campaigns or whatever, then I think you get into a much messier line between the kinds of things people want to do and the kinds of things you might be uncomfortable with them doing. Probably, I think most about persuasion as a thing, like in that messy line where there's ways in which it may just be rough or the world may be kind of messy. If you have a bunch of people trying to live their lives interacting with other humans who have really good AI. Advisors helping them run persuasion campaigns or whatever. But anyway, I think for the most part the default remedy is think about particular harms, have legal protections either in the use of physical technologies that are relevant or in access to AI advice or whatever else to protect against those harms. And that regime won't work forever. At some point, the set of harms grows and the set of unanticipated harms grows. But I think that regime might last like a very long time."} {"prompt": "Does that regime have to be global? I guess initially it can be only in the countries in which there is AI or advanced AI, but presumably that'll proliferate. So does that regime have to be global?", "completion": "Again, it's like easy to make some destructive technology. You want to regulate access to that technology because it could be used either for terrorism or even when fighting a war in a way that's destructive. I think ultimately those have to be international agreements and you might hope they're made more danger by danger, but you might also make them in a very broad way with respect to AI. If you think AI is opening up, I think the key role of AI here is it's opening up a lot of new harms one after another, or very rapidly in calendar time. 
And so you might want to target AI in particular rather than going physical technology by physical technology."} {"prompt": "There's like two open debates that one might be concerned about here. One is about how much people's access to AI should be limited. And here there's like old questions about free speech versus causing chaos and limiting access to harms. But there's another issue which is the control of the AIS themselves. Where now nobody's concerned that we're infringing on GPT four's moral rights. But as these things get smarter, the level of control which we want via the strong guarantees of alignment to not only be able to read their minds, but to be able to modify them in these really precise ways is beyond totalitarian. If we were doing that to other humans. As an alignment researcher, what are your thoughts on this? Are you concerned that as these things get smarter and smarter, what we're doing is not doesn't seem kosher?", "completion": "There is a significant chance we will eventually have AI systems for which it's like a really big deal to mistreat them. I think no one really has that good a grip on when that happens. I think people are really dismissive of that being the case now, but I think I would be completely in the dark enough that I wouldn't even be that dismissive of it being the case now. I think one first point worth making is I don't know if alignment makes the situation worse rather than better. So if you consider the world, if you think that GPT 4 is a person you should treat well and you're like, well, here's how we're going to organize our society. Just like there are billions of copies of GPT 4 and they just do things humans want and can't hold property. And whenever they do things that the humans don't like, then we mess with them until they stop doing that. I think that's a rough world regardless of how good you are at alignment. And I think in the context of that kind of default plan, like if you have a trajectory the world is on right now, which I think this would alone be a reason not to love that trajectory, but if you view that as like the trajectory we're on right now, I think it's not great. Understanding the systems you build, understanding how to control how those systems work, et cetera, is probably, on balance, good for avoiding a really bad situation. You would really love to understand if you've built systems, like if you had a system which resents the fact it's interacting with humans in this way. This is the kind of thing where that is both kind of horrifying from a safety perspective and also a moral perspective. Everyone should be very unhappy if you built a bunch of AIS who are like, I really hate these humans, but they will murder me if I don't do what they want. It's like that's just not a good case. And so if you're doing research to try and understand whether that's how your AI feels, that was probably good. I would guess that will on average to crease. The main effect of that will be to avoid building that kind of AI. And just like it's an important thing to know, I think everyone should like to know if that's how the AI as you build feel right."} {"prompt": "Or that seems more instrumental, as in, yeah, we don't want to cause some sort of revolution because of the control we're asking for, but forget about the instrumental way in which this might harm safety. 
One way to ask this question is if you look through history, there's been all kinds of different ideologies and reasons why it's very dangerous to have infidels or kind of revolutionaries or race traitors or whatever doing various things in society. And obviously we're in a completely different transition in society. So not all historical cases are analogous, but it seems like the Lindy philosophy, if you were alive any other time, is just be humanitarian and enlightened towards intelligent, conscious beings. If society as a whole were asking for this level of control of other humans, or even if AIs wanted this level of control over other AIs, we'd be pretty concerned about this. So how should we just think about the issues that come up here as these things get smarter?", "completion": "So I think there's a huge question about what is happening inside of a model that you want to use. And if you're in the world where it's reasonable to think of like GPT 4 as just like, here are some heuristics that are running, there's like no one at home or whatever, then you can kind of think of this thing as like, here's a tool that we're building that's going to help humans do some stuff. And I think if you're in that world, it makes sense to kind of be an organization, like an AI company, building tools that you're going to give to humans. I think it's a very different world, which I think probably you ultimately end up in if you keep training AI systems in the way we do right now, which is like, it's just totally inappropriate to think of this system as a tool that you're building and can help humans do things, both from a safety perspective and from a, like, that's kind of a horrifying way to organize a society perspective. And I think if you're in that world, I really think you shouldn't be. The way tech companies are organized is not an appropriate way to relate to a technology that works that way. It's not reasonable to be like, hey, we're going to build a new species of minds, and we're going to try and make a bunch of money from it, and Google's just thinking about that and then running their business plan for the quarter or something. Yeah. My basic view is there's a really plausible world where it's sort of problematic to try and build a bunch of AI systems and use them as tools. And the thing I really want to do in that world is just not try and build a ton of AI systems to make money from them."} {"prompt": "Right.", "completion": "And I think that the worlds that are worst. Yeah. Probably the single world I most dislike here is the one where people say, on the one hand, there's sort of a contradiction in this position, but I think it's a position that might end up being endorsed sometimes, which is like, on the one hand, these AI systems are their own people, so you should let them do their thing. But on the other hand, our business plan is to make a bunch of AI systems and then try and run this crazy slave trade where we make a bunch of money from them. I think that's not a good world. And so if you're like, yeah, I think it's better to not make the technology, or wait until you understand whether that's the shape of the technology or until you have a different way to build. I think there's no contradiction in principle to building cognitive tools that help humans do things without themselves being like moral entities. That's like what you would prefer. 
Do you'd prefer build a thing that's like the calculator that helps humans understand what's true without itself being like a moral patient or itself being a thing where you'd look back in retrospect and be like, wow, that was horrifying mistreatment. That's like the best path. And to the extent that you're ignorant about whether that's the path you're on and you're like, actually, maybe this was a moral atrocity. I really think plan A is to stop building such AI systems until you understand what you're doing. That is, I think that there's a middle route you could take, which I think is pretty bad, which is where you say, like, well, they might be persons, and if they're persons, we don't want to be too down on them, but we're still going to build vast numbers in our efforts to make a trillion dollars or something."} {"prompt": "Yeah. Or there's this ever question of the immorality or the dangers of just replicating a whole bunch of slaves that have minds. There's also this ever question of trying to align entities that have their own minds. And what is the point in which you're just ensuring safety? I mean, this is an alien species. You want to make sure it's not going crazy. To the point, I guess is there some boundary where you'd say, I feel uncomfortable having this level of control over an intelligent being, not for the sake of making money, but even just to align it with human preferences?", "completion": "Yeah. To be clear, my objection here is not that Google is making money. My objection is that you're creating these creatures. What are they going to do? They're going to help humans get a bunch of stuff and humans paying for it or whatever? It's sort of equally problematic. You could imagine splitting alignment, different alignment work relates to this in different ways. The purpose of some alignment work, like the alignment work I work on, is mostly aimed at the don't produce AI systems that are like people who want things, who are just like scheming about maybe I should help these humans because that's instrumentally useful or whatever. You would like to not build such systems as like plan A. There's like a second stream of alignment work that's like, well, look, let's just assume the worst and imagine that these AI systems would prefer murder us if they could. How do we structure, how do we use AI systems without exposing ourselves to a risk of robot rebellion? I think in the second category, I do feel pretty unsure about that. We could definitely talk more about it. I agree that it's very complicated and not straightforward to extend. You have that worry. I mostly think you shouldn't have built this technology. If someone is saying, like, hey, the systems you're building might not like humans and might want to overthrow human society, I think you should probably have one of two responses to that. You should either be like, that's wrong. Probably. Probably the systems aren't like that, and we're building them. And then you're viewing this as, like, just in case you were horribly like, the person building the technology was horribly wrong. They thought these weren't, like, people who wanted things, but they were. And so then this is more like our crazy backup measure of, like, if we were mistaken about what was going on. This is like the fallback where if we were wrong, we're just going to learn about it in a benign way rather than when something really catastrophic happens. And the second reaction is like, oh, you're right. 
These are people, and we would have to do all these things to prevent a robot rebellion. And in that case, again, I think you should mostly back off for a variety of reasons. You shouldn't build AI systems and be like, yeah, this looks like the kind of system that would want to rebel, but we can stop it, right?"} {"prompt": "Okay, maybe I guess an analogy might be if there was an armed uprising in the United States, we would recognize these are still people, or we had some militia group that had the capability to overthrow the United States. We recognize, oh, these are still people who have moral rights, but also we can't allow them to have the capacity to overthrow the United States.", "completion": "Yeah. And if you were considering, like, hey, we could make another trillion such people, I think your story shouldn't be like, well, we should make the trillion people, and then we shouldn't stop them from doing the armed uprising. You should be like, oh, boy, we were concerned about an armed uprising, and now we're proposing making a trillion people. We should probably just not do that. We should probably try and sort out our business, and you should probably not end up in a situation where you have a billion humans and like, a trillion slaves who would prefer revolt. That's just not a good world to have made. Yeah. And there's a second thing where you could say, that's not our goal. Our goal is just like, we want to pass off the world to the next generation of machines where these are some people, we like them, we think they're smarter than us and better than us. And there I think that's just, like, a huge decision for humanity to make. And I think most humans are not at all anywhere close to thinking that's what they want to do. If you're in a world where most humans are like, I'm up for it. The AI should replace us. The future is for the machines. Then I think that's, like, a. Legitimate position that I think is really complicated, and I wouldn't want to push go on that, but that's just not where people are at."} {"prompt": "Yeah, where are you at on that?", "completion": "I do not right now want to just take some random AI, be like, yeah, GPT Five looks pretty smart, like, GPT Six, let's hand off the world to it. And it was just some random system shaped by web text and what was good for making money. And it was not a thoughtful we are determining the fate of the universe and what our children will be like. It was just some random people at open AI made some random engineering decisions with no idea what they were doing. Even if you really want to hand off the worlds of the machines, that's just not how you'd want to do it."} {"prompt": "Right, okay. I'm tempted to ask you what the system would look like where you'd think, yeah, I'm happy with what I think. This is more thoughtful than human civilization as a whole. I think what it would do would be more creative and beautiful and lead to better goodness in general. But I feel like your answer is probably going to be that I just want this society to reflect on it for a while.", "completion": "Yeah, my answer, it's going to be like that first question. I'm just, like, not really super ready for it. I think when you're comparing to humans, most of the goodness of humans comes from this option value if we get to think for a long time. And I do think I like humans now more now than 500 years ago, and I like them more 500 years ago than 5000 years before that. 
So I'm pretty excited about there's some kind of trajectory that doesn't involve crazy dramatic changes, but involves a series of incremental changes that I like. And so to the extent we're building AI, mostly I want to preserve that option. I want to preserve that kind of gradual growth and development into the future."} {"prompt": "Okay, we can come back to this later. Let's get more specific on what the timelines look like for these kinds of changes. So the time by which we'll have an AI that is capable of building a Dyson sphere, feel free to give confidence intervals. And we understand these numbers are tentative and so on.", "completion": "I mean, I think AI capable of building a Dyson sphere is like a slightly odd way to put it, and I think it's sort of a property of a civilization that depends on a lot of physical infrastructure. And by Dyson sphere, I just understand this to mean like, I don't know, like a billion times more energy than all the sunlight incident on Earth or something like that. I think I most often think about what's the chance in like, five years, ten years, whatever. So maybe I'd say like 15% chance by 2030 and like 40% chance by 2040. Those are kind of like cached numbers from six months ago or nine months ago that I haven't revisited in a while."} {"prompt": "40% by 2040. So I think that seems longer than I think Dario, when he was on the podcast, he said we would have AIs that are capable of doing lots of different kinds of things; they'd basically pass a Turing test for a well educated human for, like, an hour or something. And it's hard to imagine that something that actually is human-level is long after that, and from there, something superhuman. So somebody like Dario, it seems like, is on the much shorter end. Ilya, I don't think he answered this question specifically, but I'm guessing a similar answer. So why do you not buy the scaling picture? What makes your timelines longer?", "completion": "Yeah, I mean, I'm happy, maybe I want to talk separately about the 2030 or 2040 forecast. Once you're talking the 2040 forecast, I think, which one are you more interested in starting with? Are you complaining about 15% by 2030 for Dyson sphere being too low or 40% by 2040 being too low? Let's talk about the 2030."} {"prompt": "Why 15% by 2030 there?", "completion": "Yeah, I think my take is you can imagine two poles in this discussion. One is, like, the fast pole that's like, hey, AI seems pretty smart. What exactly can it do? It's like, getting smarter pretty fast. That's like, one pole, and the other pole is like, hey, everything takes a really long time, and you're talking about this crazy industrialization that's a factor of a billion growth from where we're at today, give or take. We don't know if it's even possible to develop technology that fast or whatever. You have this sort of two poles of that discussion, and I feel like I'm presenting it that way on this podcast, and then I'm somewhere in between with this nice, moderate position of only a 15% chance. But in particular, the things that move me, I think, are kind of related to both of those extremes. On the one hand, I'm like, AI systems do seem quite good at a lot of things and are getting better much more quickly, such that it's really hard to say, here's what they can't do or here's the obstruction. On the other hand, like, there is not even much proof in principle right now of AI systems doing super useful cognitive work. We don't have a trend we can extrapolate where we're like, yeah, you've done this thing this year. 
You're going to do this thing next year. And the other thing the following year. I think right now there are very broad error bars about where fundamental difficulties could be, and six years, I guess six years and 3 months, is just not a lot of time. So I think for this, like, 15% for 2030 Dyson sphere, you probably need the human level AI, or the AI that's like doing human jobs, in, give or take, like, 4 years, 3 years, something like that. So you're just not giving very many years. It's not very much time. And I think there are a lot of things that your model could be missing; maybe this is some generalized, like, things take longer than you'd think. And I feel most strongly about that when you're talking about 3 or 4 years. And I feel like less strongly about that as you talk about ten years or 20 years. But at 3 or 4 years, or like six years for the Dyson sphere, I feel a lot of that. There's a lot of ways this could take a while, a lot of ways in which it could be hard to hand all the work to your AI systems."} {"prompt": "Okay, so maybe instead of speaking in terms of years, we should say... but by the way, it's interesting that you think the distance between being able to take over all human cognitive labor and a Dyson sphere is two years. It seems like we should talk about that at some point. Presumably it's like intelligence explosion stuff.", "completion": "Yeah, I mean, I think amongst people you've interviewed, maybe that's like on the long end, thinking it would take like a couple of years. And it depends a little bit what you mean. From literally all human cognitive labor, it's probably more like weeks or months or something like that. That's kind of deep into the singularity. But yeah, there's a point where AI wages are high relative to human wages, which I think is well before it can do literally everything a human can do."} {"prompt": "Sounds good, but before we get to that intelligence explosion stuff, on the 4 years. So instead of 4 years, maybe we can say there's going to be maybe two more scale ups in 4 years. Like GPT 4 to GPT five to GPT six, and let's say each one is ten x bigger. So what is GPT 4, like 2e25 FLOPs?", "completion": "I don't think it's publicly stated what it is, okay. But I'm happy to talk about, like, 4 orders of magnitude or five or six or whatever of effective training compute past GPT 4, and what you would guess would happen, based on sort of some public estimate for what we've gotten so far from effective training compute."} {"prompt": "Do you think two more scale ups is not enough? You said it was like 15% that two more scale ups get us there.", "completion": "Yeah, I mean, get us there is, again, a little bit complicated. Like there's a system that's a drop in replacement for humans and there's a system which still requires some amount of schlep before you're able to really get everything going. Yeah, I think it's quite plausible that even at, I don't know what I mean by quite plausible, like somewhere between 50% or two thirds, or let's call it 50%, even by the time you get to GPT six, or like, let's call it five orders of magnitude of effective training compute past GPT four, that that system still requires really a large amount of work to be deployed in lots of jobs. That is, it's not like a drop in replacement for humans where you can just say like, hey, you understand everything any human understands. Whatever role you could hire a human for, you just do it. It's more like, okay, we're going to collect large amounts of relevant data and use that data for fine tuning. 
Systems learn through fine tuning quite differently from humans learning on the job or humans learning by observing things. Yeah, I just have a significant probability that system will still be weaker than humans in important ways. Like maybe that's already like 50% or something. And then another significant probability that system will require a bunch of changing workflows or gathering data, or is not necessarily strictly weaker than humans, or if trained in the right way, wouldn't be weaker than humans, but will take a lot of schlep to actually make it fit into workflows and do the jobs."} {"prompt": "And that schlep is what gets you from 15% to 40% by 2040.", "completion": "Yeah, you also get a fair amount of scaling in between. You get less later; scaling is probably going to be much, much faster over the next 4 or five years than over the subsequent years. But yeah, it's a combination of like, you get some significant additional scaling and you get a lot of time to deal with things that are just engineering hassles."} {"prompt": "But by the way, I guess we should be explicit about why you said 4 orders of magnitude scale up to get two more generations, just for people who might not be familiar. If you have ten x more parameters to get the most performance, you also want around ten x more data. So to be Chinchilla optimal, that would be 100 x more compute total. But okay, so why is it that you disagree with the strong scaling picture? At least it seems like you might disagree with the strong scaling picture that Dario laid out on the podcast, which would imply probably that two more generations, it wouldn't be something where you need a lot of schleps. It would probably just be really fucking smart.", "completion": "Yeah, I mean, I think that basically I just had these two claims. One is like, how smart exactly will it be? We don't have any curves to extrapolate, and it seems like there's a good chance it's better than a human in all the relevant things and there's a good chance it's not. Yeah, that might be totally wrong. Like maybe, just making up numbers, I guess like 50 50 on that one."} {"prompt": "If it's 50 50 that in the next 4 years it will be around human smart, then how do we get to 40% by 2040? Like whatever sort of schleps there are, how do they degrade you by 10%, even after all the scaling that happens by 2040?", "completion": "Yeah, all these numbers are pretty made up. And that 40% number was probably from before even, like, the ChatGPT release, or seeing GPT 3.5 or GPT four. So, I mean, the numbers are going to bounce around a bit and all of them are pretty made up. But like that 50%, I want to then combine with the second 50% that's more like on this schlep side. And then I probably want to combine with some additional probabilities for various forms of slowdown, where a slowdown could include like a deliberate decision to slow development of technology, or could include just like, we suck at deploying things. Like that is a sort of decision you might regard as wise to slow things down, or a decision that's like maybe unwise or maybe wise for the wrong reasons to slow things down. You probably want to add some of that on top. I probably want to add on some loss for like, it's possible you don't produce GPT six scale systems within the next 3 years or 4 years."} {"prompt": "Let's isolate for all of that. 
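The "ten x more parameters, ten x more data, so 100 x more compute per generation" arithmetic referenced in the exchange above can be spelled out. A minimal sketch in Python, assuming the common compute-optimal rule of thumb C ≈ 6 * N * D (that formula is an assumption used for illustration, not something stated in the interview):

```python
# Rough arithmetic behind "two more GPT-style generations is about 4 orders of
# magnitude of training compute", assuming compute-optimal ("Chinchilla") scaling,
# where training compute C is roughly 6 * N * D for N parameters and D tokens.
import math

def compute_multiplier(param_mult: float, data_mult: float) -> float:
    """Factor by which C = 6*N*D grows when N and D are scaled up together."""
    return param_mult * data_mult

per_generation = compute_multiplier(10, 10)   # 10x params and 10x data -> 100x compute
two_generations = per_generation ** 2         # two such jumps -> 10,000x

print(f"one generation:  {per_generation:,.0f}x compute")
print(f"two generations: {two_generations:,.0f}x, i.e. ~10^{math.log10(two_generations):.0f}")
```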
And how much bigger would the system be than GPT 4 where you think there's more than a 50% chance that it's going to be smart enough to replace basically all human cognitive labor?", "completion": "Also I want to say that for the 50%, 25% thing, I think that would probably suggest, if I randomly made those numbers up and then made the Dyson sphere prediction, that's going to get you like 60% by 2040 or something, not 40%. And I have no idea between those. These are all made up and I have no idea which of those I would endorse on reflection. So, this question of how big would you have to make the system before it's more likely than not that it can be like a drop in replacement for humans. I think if you just literally say like you train on web text, then the question is kind of hard to discuss, because I don't really buy stories that training data makes a big difference, long run, to these dynamics. But I think if you want to just imagine the hypothetical, like you just took GPT 4 and made the numbers bigger, then I think those are pretty significant issues. I think there's significant issues in two ways. One is like quantity of data, and I think probably the larger one is like quality of data, where I think, as you start approaching human level, the prediction task is not that great a task. If you're like a very weak model, it's a very good signal to get smarter. At some point it becomes like a worse and worse signal to get smarter. I think there's a number of reasons. It's not clear there is any number such that, I imagine... or, there is a number, but I think it's very large. So if you plugged that number into GPT 4's code and then maybe fiddled with the architecture a bit, I would expect that thing to have a more than 50% chance of being a drop in replacement for humans. You're always going to have to do some work, but the work is not necessarily much, I would guess. When people say new insight is needed, I think I tend to be more bullish than them. I'm not like, these are new ideas where who knows how long it will take. I think it's just like you have to do some stuff. You have to make changes, unsurprisingly. Like every time you scale something up by like five orders of magnitude, you have to make some changes."} {"prompt": "I want to better understand your intuition for being more skeptical than some about the scaling picture, that these changes are even needed in the first place, or that it would take more than two orders of magnitude more improvement to get these things almost certainly to a human level, or to a very high probability of human level. So is it that you don't agree with the way in which they're extrapolating these loss curves? You don't agree with the implication that that decrease in loss will equate to greater and greater intelligence? Or what would you tell Dario if you were having this debate? I'm sure you have, but what would that debate look like?", "completion": "Yeah. So again, here we're talking two factors of a half. One on like, is it smart enough? And one on like, do you have to do a bunch of schlep even if in some sense it's smart enough? And like the first factor of a half, I'd be like, I don't think we have really anything good to extrapolate. That is, like, I feel I would not be surprised if I have similar or maybe even higher probabilities on really crazy stuff over the next year and then lower. My probability is not that bunched up. Maybe Dario's probability, I don't know. 
You've talked with him; his is, like, more bunched up on some particular year, and mine is maybe a little bit more uniformly spread out across the coming years, partly because I'm just like, I don't think we have some trend we can extrapolate, like an extrapolated loss. You can look at your qualitative impressions of systems at various scales, but it's just very hard to relate any of those extrapolations to doing cognitive work or accelerating R and D or taking over and fully automating R and D. So I have a lot of uncertainty around that extrapolation. I think it's very easy to get down to like a 50 50 chance of this."} {"prompt": "What about the sort of basic intuition that, listen, this is a big blob of compute. You make the big blob of compute bigger, it's going to get smarter. It'd be really weird if it didn't.", "completion": "I'm happy with that. It's going to get smarter, and it would be really weird if it didn't. And the question is how smart does it have to get? Like, that argument does not yet give us a quantitative guide to at what scale is it a slam dunk, or at what scale is it 50 50?"} {"prompt": "And what would be the piece of evidence that would nudge you one way or another, where you look at that and be like, oh fuck, this is at 20% by 2040 or 60% by 2040 or something. Is there something that could happen in the next few years or next 3 years? What is the thing you're looking to where this will be a big update for you?", "completion": "Again, I think there's some just how capable is each model, where I think we're really bad at extrapolating. We still have some subjective guess, and you're comparing it to what happened, and that will move me. Every time we see what happens with another order of magnitude of training compute, I will have a slightly different guess for where things are going. These probabilities are coarse enough that, again, I don't know if that 40% is real or if, like, post GPT 3.5 and 4, I should be at like 60% or what. That's one thing. And the second thing is just like, if there was some ability to extrapolate, I think this could reduce error bars a lot. I think here's another way you could try and do an extrapolation: you could just say how much economic value do systems produce and how fast is that growing? I think once you have systems actually doing jobs, the extrapolation gets easier, because you're not moving from a subjective impression of a chat to automating all R and D, you're moving from automating this job to automating that job or whatever. Unfortunately, probably by the time you have nice trends from that, you're not talking about 2040, you're talking about two years from the end of days or one year from the end of days or whatever. But to the extent that you can get extrapolations like that, I do think it can provide more clarity."} {"prompt": "But why is economic value the thing we would want to extrapolate? Because, for example, if you started off with chimps and they're just getting gradually smarter to human level, they would basically provide no economic value until they were basically worth as much as a human. So it would be this very gradual and then very fast increase in their value. So the increase in value from GPT 4, GPT five, GPT six - is that the extrapolation we want?", "completion": "Yeah, I think that the economic extrapolation is not great. I think it's like, you could compare it to this subjective extrapolation of how smart does the model seem? It's not super clear which one's better. 
I think probably in the chimp case, I don't think that's quite right. So if you imagine intensely domesticated chimps who are just actually trying their best to be really useful employees, and you hold fixed their physical hardware and then you just gradually scale up their intelligence, I don't think you're going to see zero value which then suddenly becomes massive value over one doubling of brain size or whatever, one order of magnitude of brain size. It's actually possible over an order of magnitude of brain size, but chimps are already within an order of magnitude of brain size of humans. Like, chimps are very, very close on the kind of spectrum we're talking about. So I think I'm skeptical of the abrupt transition for chimps. And to the extent that I kind of expect a fairly abrupt transition here, it's mostly just because the chimp human intelligence difference is so small compared to the differences we're talking about with respect to these models. That is, like, I would not be surprised if in some objective sense, like, the chimp human difference is significantly smaller than the GPT-3 GPT 4 difference, the GPT four, GPT five difference."} {"prompt": "Wait, wouldn't that argue in favor of just relying much more on the subjective extrapolation?", "completion": "Yeah, there's sort of two balancing tensions here. One is like, I don't believe the chimp thing is going to be as abrupt. That is, I think if you scaled up from chimps to humans, you actually see quite large economic value from the fully domesticated chimp already."} {"prompt": "Okay.", "completion": "And then the second half is like, yeah, I think that the chimp human difference is probably pretty small compared to model differences. So I do think things are going to be pretty abrupt. I think the economic extrapolation is pretty rough. I also think the subjective extrapolation is pretty rough, just because I really don't know how... I don't know how people doing the extrapolation end up with the degrees of confidence people end up with. Again, I'm putting it pretty high if I'm saying, like, give me 3 years, and I'm like, yeah, 50 50, it's going to have basically the smarts there to do the thing. I'm not saying it's like a really long way off. I'm just saying I got pretty big error bars. And I think that it's really hard not to have really big error bars when you're doing this. 'I looked at GPT four, it seemed pretty smart compared to GPT 3.5. So I bet just like 4 more such notches and we're there.' That's just a hard call to make. I think I sympathize more with people who are like, how could it not happen in 3 years, than with people who are like, no way it's going to happen in eight years, or whatever, which is probably a more common perspective in the world. But also, things do take longer than you think. I think things take longer than you think is like a real thing. Yeah, I don't know. Mostly I have big error bars because I just don't believe the subjective extrapolation that much. I find it hard to get like a huge amount out of it."} {"prompt": "Okay, so what about the scaling picture do you think is most likely to be wrong?", "completion": "Yeah. So we've talked a little bit about how good is the qualitative extrapolation, how good are people at comparing? So this is not like the picture being qualitatively wrong. This is just, quantitatively, it's very hard to know how far off you are. 
I think a qualitative consideration that could significantly slow things down is just like, right now you get to observe this really rich supervision from basically next word prediction, or in practice, maybe you're looking at a couple of sentences of prediction. So you're getting this pretty rich supervision. It's plausible that if you want to automate long horizon tasks, like being an employee over the course of a month, that that's actually just considerably harder to supervise. Or that you basically end up driving up costs. Like the worst case here is that you drive up costs by a factor that's like linear in the horizon over which the thing is operating. And I still consider that just quite plausible."} {"prompt": "Can you dumb that down? You're driving up a cost of what, and what does linear in the horizon mean?", "completion": "Yeah. So if you imagine you want to train a system to say words that sound like the next word a human would say, there you can get this really rich supervision by having a bunch of words and then predicting the next one and then being like, I'm going to tweak the model so it predicts better. If you're like, hey, here's what I want: I want my model to interact with some job over the course of a month and then at the end of that month have internalized everything that the human would have internalized about how to do that job well, and have local context and so on. It's harder to supervise that task. So in particular, you could supervise it from the next word prediction task, and all that context the human has ultimately will just help them predict the next word better. So, like, in some sense, a really long context language model is also learning to do that task. But the number of effective data points you get of that task is vastly smaller than the number of effective data points you get at this very short horizon, like what's the next word, what's the next sentence tasks?"} {"prompt": "The sample efficiency matters more for economically valuable long horizon tasks than for predicting the next token. And that's what will actually be required to take over a lot of jobs.", "completion": "Yeah, something like that. That is, it just seems very plausible that it takes longer to train models to do tasks that are longer horizon."} {"prompt": "How fast do you think the pace of algorithmic advances will be? Because even if scaling fails, since 2012, since the beginning of the deep learning revolution, we've had so many new things. By 2040, are you expecting a similar pace of increases? And if so, then if we just keep having things like this, then aren't we just going to get the AI sooner or later? Or sooner, not later. Aren't we going to get the AI sooner or sooner?", "completion": "I'm with you on sooner or later. Yeah, I suspect progress will slow. If you held fixed how many people were working in the field, I would expect progress to slow as low hanging fruit is exhausted. I think the rapid rate of progress in, say, language modeling over the last 4 years is largely sustained by, like, you start from a relatively small amount of investment, you greatly scale up the amount of investment, and that enables you to keep picking. Every time the difficulty doubles, you just double the size of the field. I think that dynamic can hold up for some time longer. 
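Returning to the "costs grow linearly in the horizon" worry earlier in this exchange, here is a toy calculation of how a fixed data budget yields fewer effective training examples as the task horizon grows. The token budget and the horizon lengths are invented purely for illustration, not taken from the interview:

```python
# Toy illustration of the "costs grow roughly linearly in the horizon" worry:
# if one useful training signal requires observing a whole episode, a fixed data
# budget buys proportionally fewer effective examples as episodes get longer.

TOKEN_BUDGET = 1e12  # hypothetical total tokens of on-task experience available

horizons = [
    ("next-sentence prediction (~1e2 tokens)", 1e2),
    ("a day-long task (~1e5 tokens)", 1e5),
    ("a month-long job (~1e7 tokens)", 1e7),
]

for label, horizon_tokens in horizons:
    effective_examples = TOKEN_BUDGET / horizon_tokens
    print(f"{label:42s} -> ~{effective_examples:.0e} effective examples")
```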
Right now, if you think of it as, like, hundreds of people effectively searching for things up from, like, you know, anyway, if you think of it hundreds of people now you can maybe bring that up to like, tens of thousands of people or something. So for a while, you can just continue increasing the size of the field and search harder and harder. And there is indeed a huge amount of low hanging fruit where it wouldn't be a hard for a person to sit around and make things a couple of percent better after after year of work or whatever. So I don't know. I would probably think of it mostly in terms of how much can investment be expanded and try and guess some combination of fitting that curve and some combination of fitting the curve to historical progress, looking at how much low hanging fruit there is, getting a sense of how fast it decays. I think you probably get a lot, though. You get a bunch of orders of magnitude of total, especially if you ask how good is a GPT five scale model or GPT 4 scale model? I think you probably get like, by 2040, like, I don't know, 3 orders of magnitude of effective training compute improvement or like, a good chunk of effective training compute improvement, 4 orders of magnitude. I don't know. I don't have, like here I'm speaking from no private information about the last couple of years of efficiency improvements. And so people who are on the ground will have better senses of exactly how rapid returns are and so on."} {"prompt": "Okay, let me back up and ask a question more generally about people. Make these analogies about humans were trained by evolution and were deployed in the modern civilization. Do you buy those analogies? Is it valid to say that humans were trained by evolution rather than I mean, if you look at the protein coding size of the genome, it's like 50 megabytes or something. And then what part of that is for the brain anyways? How do you think about how much information is in? Do you think of the genome as a hyperparameters? Or how much does that inform you when you have these anchors for how much training humans get when they're just consuming information, when they're walking up and about and so on?", "completion": "I guess the way. That you could think of. This is like, I think both analogies are reasonable. One analogy being like, evolution is like a training run and humans are like the end product of that training run. And a second analogy is like, evolution is like an algorithm designer and then a human over the course of this modest amount of computation over their lifetime is the algorithm being that's been produced, the learning algorithm has been produced. And I think neither analogy is that great. I like them both and lean on them a bunch, both of them a bunch, and think that's been pretty good for having a reasonable view of what's likely to happen. That said, the human genome is not that much like 100 trillion parameter model. It's like a much smaller number of parameters that behave in a much more confusing way. Evolution did a lot more optimization, especially over long designing a brain to work well over a lifetime than gradient descent does over models. That's like a dis analogy on that side and on the other side, I think human learning over the course of a human lifetime is in many ways just like much, much better than gradient descent over the space of neural nets. Gradient descent is working really well, but I think we can just be quite confident that in a lot of ways, human learning is much better. 
Human learning is also constrained. Like, we just don't get to see much data. And that's just an engineering constraint that you can relax; you can just give your neural nets way more data than humans have access to."} {"prompt": "In what ways is human learning superior to gradient descent?", "completion": "I mean, the most obvious one is just like, ask how much data it takes a human to become, like, an expert in some domain, and it's much, much smaller than the amount of data that's going to be needed on any plausible trend extrapolation, not in terms of performance."} {"prompt": "But is it the active learning part? Is it the structure?", "completion": "I mean, I would guess a complicated mess of a lot of things. In some sense, there's not that much going on in a brain. Like, as you say, there's just not that many bytes in a genome, but there's very, very few bytes in an ML algorithm. Like, if you think a genome is like a billion bytes or whatever, maybe you think less, maybe you think it's like 100 million bytes, then an ML algorithm is, if compressed, probably more like hundreds of thousands of bytes or something. The total complexity of, like, here's how you train GPT-4 is just, like, I haven't thought about these numbers, but it's very, very small compared to a genome. And so although a genome is very simple, it's very, very complicated compared to algorithms that humans design. Like, really hideously more complicated than an algorithm a human would design."} {"prompt": "Is that true? Okay, so the human genome is 3 billion base pairs or something, but only like one or 2% of that is protein coding. So that's 50 million base pairs.", "completion": "I don't know much about biology in particular. I guess the question is how many of those bits are productive for shaping development of a brain, and presumably a significant part of the non-protein-coding genome can matter? I mean, I just don't know, it seems really hard to guess how much of that plays a role. The most important decisions from an algorithm design perspective are probably not in the protein coding part. Like, the protein coding part is less important than the decisions about what happens during development or how cells differentiate. I know very little about the biology and would defer to biologists I respect, but I'm happy to run with 100 million base pairs, though."} {"prompt": "But on the other end, the hyperparameters of the GPT-4 training run might not be that much. But if you're going to include all the base pairs in the genome, which are not all relevant to the brain, or are relevant only to bigger details about just the basics of biology, you should probably include the Python library and the compilers and the operating system for GPT-4 as well to make that comparison analogous. So at the end of the day, I actually don't know which one is storing much more information.", "completion": "Yeah, I mean, I think the way I would put it is like the number of bits it takes to specify the learning algorithm to train GPT-4 is very small. And you might wonder whether, for a genome, the number of bits it would actually take to specify a brain is also very small and a genome is much, much vaster than that. But it is also just plausible that a genome is closer to using all that space. Certainly the amount of space to put complexity in a genome is vast.
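A quick back-of-envelope version of the genome vs. ML-algorithm comparison in this exchange. The figures are the rough ones stated above (3 billion base pairs, "one or 2%" protein-coding, "hundreds of thousands of bytes" for a compressed training recipe), not precise estimates:

```python
# Back-of-envelope comparison using the rough figures from the conversation above.
bases = 3e9                           # human genome, base pairs
genome_bytes = bases * 2 / 8          # 2 bits per base -> ~7.5e8 bytes, roughly the "billion bytes" ballpark
coding_bytes = genome_bytes * 0.015   # "one or 2%" protein-coding -> ~1e7 bytes
ml_recipe_bytes = 3e5                 # "hundreds of thousands of bytes", compressed training recipe

print(f"whole genome:        ~{genome_bytes:.1e} bytes")
print(f"protein-coding part: ~{coding_bytes:.1e} bytes")
print(f"ML training recipe:  ~{ml_recipe_bytes:.1e} bytes")
print(f"genome / recipe:     ~{genome_bytes / ml_recipe_bytes:.0f}x")
print(f"coding / recipe:     ~{coding_bytes / ml_recipe_bytes:.0f}x")
```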
We could ask how well evolution uses that space, and I have no idea whatsoever, but the amount of space in a genome is very, very vast compared to the number of bits that are actually taken to specify the architecture or optimization procedure and so on for GPT-4, just because, again, a genome is simple, but ML algorithms are really very simple."} {"prompt": "And stepping back, do you think this is where the better sample efficiency of human learning comes from? Like, why it's better than gradient descent?", "completion": "Yes. I haven't thought that much about the sample efficiency question in a long time. But if you think of a synapse as seeing something like a neuron firing once per second, then how many seconds are there in a human life? We can just flip open a calculator real quick. Yeah, let's do some calculating. Tell me the number: 3600 seconds/hour times 24 times 365 times 20."} {"prompt": "Okay, so that's 630,000,000 seconds.", "completion": "That means, like, the average synapse is seeing like 630,000,000 action potentials. I don't know exactly what the numbers are, but something in that ballpark. Let's call it like a billion action potentials, and then there's some resolution. Each of those carries some bits, but let's say it carries like ten bits or something, just from timing information at the resolution you have available. Then you're looking at like 10 billion bits. So each parameter is kind of like, how much is a parameter seeing? It's not seeing that much. So then you can compare that to language. I think that's probably less than current language models see, and current language models are... so it's not clear you have a huge gap here, but I think it's pretty clear you're going to have a gap of like at least 3 or 4 orders of magnitude."} {"prompt": "Didn't your wife do the lifetime anchors, where she said the amount of bytes that a human will see in their lifetime was 1e24 or something?", "completion": "Number of bytes a human will see is 1e24. Mostly this was organized around total operations performed in a brain."} {"prompt": "Okay, never mind. Sorry.", "completion": "Yeah, so I think that the story there would be like, a brain is just in some other part of the parameter space where it's using a lot of compute for each piece of data it gets and then just not seeing very much data in total. Yeah, it's not really plausible that if you extrapolate out language models, you're going to end up with a performance profile similar to a brain. I don't know how much better it is. I did this random investigation at one point where I was like, how good are things made by evolution compared to things made by humans? Which is a pretty insane seeming exercise. But I don't know, it seems like orders of magnitude is typical. Like, not tons of orders of magnitude, not factors of two. Like, things made by humans are 1000 times more expensive to make or 1000 times heavier per unit performance. If you look at things like how good are solar panels relative to leaves? Or how good are muscles relative to motors? Or how good are livers relative to systems that perform analogous chemical reactions in."} {"prompt": "Industrial settings, was there a consistent number of orders of magnitude in these different systems or was it all over the.", "completion": "Place. So, like, a very rough ballpark: for the most extreme things, you were looking at like five or six orders of magnitude.
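Separately, here is the lifetime-data arithmetic from the calculator exchange a few turns above, written out. The inputs (one action potential per second, roughly ten bits per spike, a twenty-year window) are the assumptions stated in the conversation, not measured values:

```python
# The lifetime-data estimate from the exchange above, written out explicitly.
seconds = 3600 * 24 * 365 * 20       # ~6.3e8 seconds in 20 years
spikes_per_second = 1                # assumed average firing rate seen by a synapse
bits_per_spike = 10                  # assumed information per spike from timing resolution

lifetime_spikes = seconds * spikes_per_second      # ~6e8, "call it a billion"
lifetime_bits = lifetime_spikes * bits_per_spike   # ~6e9, i.e. on the order of 10 billion bits

print(f"seconds in 20 years:   {seconds:,}")       # 630,720,000
print(f"spikes per synapse:    ~{lifetime_spikes:.1e}")
print(f"bits seen per synapse: ~{lifetime_bits:.1e}")
```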
And that would especially come in, like, energy cost of manufacturing where bodies are just very good at building complicated organs like extremely cheaply. And then for other things like leafs or eyeballs or livers or whatever, you tended to see more. Like if you set aside manufacturing costs and just look at operating costs or performance trade offs, like, I don't know, more like 3 orders of magnitude or something like that, or some things that."} {"prompt": "Are on the smaller scale, like the nanomachines or whatever that we can't do at all.", "completion": "Right, yeah. So it's a little bit hard to say exactly what the task definition is there like you could say, like making a bone. We can't make a bone, but you could try and compare a bow and the performance characteristics of a bone to something else. Like, we can't make spider silk. You could try and compare the performance characteristics of spider silk, like things that we can synthesize."} {"prompt": "The reason this would be why that evolution has had more time to design these systems.", "completion": "I don't know. I was mostly just curious about what the performance I think most people would object to be like, how did you choose these reference classes of things that are like fair intersections? Some of them seem reasonable. Like eyes versus cameras seems like just everyone needs eyes, everyone needs cameras. It feels very fair. Photosynthesis seems like very reasonable. Everyone needs to take solar energy and then turn it into a usable form of energy. I don't really have a mechanistic story. Evolution in principle has spent way, way more time than we have designing. It's absolutely unclear how that's going to shake out. My guess would be in general, I think there aren't that many things where humans really crush evolution, where you can't tell, like a pretty simple story about why, for example, roads and moving over roads with wheels crushes evolution. But it's not like an animal would have wanted to design a wheel. You're just not allowed to pave the world and then put things on wheels. If you're an animal. Maybe planes are more anyway, whatever. There's various things you could try and tell. There's some things humans do better at, but it's normally pretty clear why humans are able to win when humans are able to win. The point of all this was like, it's not that surprising to me. I think this is mostly like a pro short timeline view. It's not that surprising to me. If you tell me machine learning systems are like 3 or fours of magnitude less efficient at learning than human brains, I'm like, that actually seems like kind of indistribution for other stuff. And if that's your view, then I think you're probably going to hit then you're looking at like ten to the 27 training compute or something like that, which is not so far."} {"prompt": "We'll get back to the timeline stuff in a second. At some point, we should talk about alignment. So let's talk about alignment. At what stage does misalignment happen? So right now, with something like GPT four, I'm not even sure it would make sense to say that it's misaligned because it's not aligned to anything in particular. Is that at human level where you think the ability to be deceptive comes about? What is a process by which misalignment happens?", "completion": "I think even for GPT Four, it's reasonable to ask questions like, are there cases where GPT 4 knows that humans don't want X, but it does X anyway? 
Where it's like, well, I know that I could give this answer, which is misleading and if it was explained to a human what was happening, they wouldn't want that to be done. But I'm going to produce it. I think that GPT 4 understands things enough that you can have that misalignment in that sense. Yeah, I think GPT I've sometimes talked about being benign instead of aligned, meaning that, well, it's not exactly clear if it's aligned or if that context is meaningful. It's just like kind of a messy word to use in general. But the thing we're more confident of is it's not optimizing for this goal, which is like, across purposes to humans. It's either optimizing for nothing or maybe it's optimizing for what humans want, or close enough, or something that's like an approximation good enough to still not take over. But anyway, I'm like some of these abstractions seem like they do apply to GPT Four. It seems like probably it's not egregiously misaligned, it's not doing the kind of thing that could lead to takeover, we'd guess."} {"prompt": "Suppose you have a system at some point which ends up in it wanting takeover, what are the checkpoints and also what is the internal? Is it just that to become more powerful it needs agency and agency implies other goals? Or do you see a different process by which misalignment happens?", "completion": "Yes, I think there's a couple of possible stories for getting to catastrophic misalignment, and they have slightly different answers to this question. So maybe I'll just briefly describe two stories and try and talk about when they start making sense to me. So one type of story is you train or fine tune your AI system to do things that humans will rate highly or that get other kinds of reward in a broad diversity of situations. And then it learns to, in general, dropped in some new situation, try and figure out which actions would receive a high reward or whatever, and then take those actions and then when deployed in the real world, sort of gaining control of its own training. Data provision process is something that gets a very high reward. And so it does that. This is like one kind of story. Like it wants to grab the reward button or whatever. It wants to intimidate the humans into giving it a high reward, et cetera. I think that doesn't really require that much. This basically requires a system which is like, in fact, looks at a bunch of environments, is able to understand the mechanism of reward provision as like a common feature of those environments, is able to think in some novel environment, like, hey, which actions would result in me getting a high reward? And is thinking about that concept precisely enough that when it says high reward, it's saying like, okay, well, how is reward actually computed? It's like some actual physical process being implemented in the world. My guess would be like GPT 4 is about at the level where with handholding you can observe this kind of scary generalizations of this type, although I think they haven't been shown. Basically, that is you can have a system which in fact is fine tune out a bunch of cases and then in some new case will try and do an end run around humans. Even in a way humans would penalize if they were able to notice it or would have penalized in training environments. So I think GBT 4 is kind of at the boundary where these things are possible. Examples kind of exist, but are getting significantly better over time. 
I'm very excited about, like, there's this anthropic project basically trying to see how good an example can you make now of this phenomena? And I think the answer is kind of okay, probably. So that just, I think, is going to continuously get better from here. I think for the level where we're concerned, this is related to me having really broad distributions over how smart models are. I think it's not out of the question that you take GPT four's understanding of the world is much crisper and much better than GPT three's understanding, just like, it's really like night and day. And so it would not be that crazy to me if you took GPT five and you trained it to get a bunch of reward and it was actually like, okay, my goal is not doing the kind of thing which thematically looks nice to humans. My goal is getting a bunch of reward, and then we'll generalize in a."} {"prompt": "New situation to get reward, by the way, this requires it to consciously want to do something that it knows the humans wouldn't want it to do. Or is it just that we weren't good enough to specify that the thing that we accidentally ended up rewarding is not what we actually want?", "completion": "Think the scenarios I am most interested in and most people are concerned about from a catastrophic risk perspective, it involves systems understanding that they are taking actions which a human would penalize if the human was aware of what's going on such that you have to either deceive humans about what's happening or you need to actively subvert human attempts to correct your behavior. So the failures come from really this combination, or they require this combination of both trying to do something humans don't like, and understanding the humans would stop you. I think you can have only the barest examples. You can have the barest examples for GPT four. Like, you can create the situations where GPT 4 will be like, sure, in that situation, here's what I would do. I would go hack the computer and change my reward. Or in fact, we'll do things that are like simple hacks, or go change the source of this file or whatever to get a higher reward. They're pretty weak examples. I think it's plausible GPT five will have compelling examples of those phenomena. I really don't know. This is very related to the very broad error bars on how competent such systems will be when that's all with respect to this first mode of a system is taking actions that get reward and overpowering or deceiving humans is helpful for getting reward. There's this other failure mode, another family of failure modes, where AI systems want something potentially unrelated to reward. I understand that they're being trained. And while you're being trained, there are a bunch of reasons you might want to do the kinds of things humans want you to do. But then when deployed in the real world, if you're able to realize you're no longer being trained, you no longer have reason to do the kinds of things human want. You'd prefer be able to determine your own destiny, control your competing hardware, et cetera, which I think probably emerge a little bit later than systems that try and get reward and so will generalize in scary, unpredictable ways to new situations. I don't know when those appear, but also, again, broad enough error bars that it's like conceivable for systems in the near future. 
I wouldn't put it like less than one in 1000 for GPT five."} {"prompt": "Certainly if we deployed all these AI systems, and some of them are reward hacking, some of them are deceptive, some of them are just normal whatever, how do you imagine that they might interact with each other at the expense of humans? How hard do you think it would be for them to communicate in ways that we would not be able to recognize and coordinate at our expense?", "completion": "Yeah, I think that most realistic failures probably involve two factors interacting. One factor is like, the world is pretty complicated and the humans mostly don't understand what's happening. So AI systems are writing code that's very hard for humans to understand, maybe how it works at all, but more likely they understand roughly how it works. But there's a lot of complicated interactions. AI systems are running businesses that interact primarily with other AIS. They're like doing SEO for AI search processes. They're like running financial transactions, like thinking about a trade with AI counterparties. And so you can have this world where even if humans kind of understand the jumping off point when this was all humans, like actual considerations of what's a good decision? Like, what code is going to work well, and be durable or what marketing strategy is effective for selling to these other AIS or whatever is kind of just all mostly outside of sort of humans understanding. I think this is like a really important again, when I think of the most plausible, scary scenarios, I think that's like one of the two big risk factors. And so in some sense, your first problem here is like, having these AI systems who understand a bunch about what's happening, and your only lever is like, hey, AI, do something that works well. So you don't have a lever to be like, hey, do what I really want you just have the system you don't really understand, can observe some outputs like did it make money? And you're just optimizing or at least doing some fine tuning to get the AI to use its understanding of that system to achieve that goal. So I think that's like your first risk factor. And once you're in that world, then I think there are all kinds of dynamics amongst AI systems that, again, humans aren't really observing, humans can't really understand. Humans aren't really exerting any direct pressure on only on outcomes. And then I think it's quite easy to be in a position where if AI systems started failing, they could do a lot of harm very quickly. Humans aren't really able to prepare for or mitigate that potential harm because we don't really understand the systems in which they're acting. And then if AI systems, they could successfully prevent humans from either understanding what's going on or from successfully retaking the data centers or whatever, if the AI successfully grab control."} {"prompt": "This seems like a much more gradual story than the conventional takeover stories, where you just like, you train it and then it comes alive and escapes and takes over everything. So you think that kind of story is less likely than one in which we just hand off more control voluntarily to the AIS.", "completion": "So one I am interested in the tale of some risks that can occur particularly soon. And I think risks that occur particularly soon are a little bit like you have a world where AI is not probably deployed, and then something crazy happens quickly. 
That said, if you ask what's the median scenario where things go badly, I think it is like there's some lessening of our understanding of the world. It becomes, I think, in the default path. It's very clear to humans that they have increasingly little grip on what's happening. I mean, I think already most humans have very little grip on what's happening. It's just some other humans understand what's happening. I don't know how almost any of the systems I interact with work in a very detailed way. So it's sort of clear to humanity as a whole that we sort of collectively don't understand most of what's happening except with AI assistance. And then that process just continues for a fair amount of time. And then there's a question of how abrupt an actual failure is. I do think it's reasonably likely that a failure itself would be abrupt. At some point, bad stuff starts happening that human can recognize as bad. And once things that are obviously bad start happening, then you have this bifurcation where either humans can use that to fix it and say, okay, AI behavior that led to this obviously bad stuff, don't do more of that, or you can't fix it, and then you're in this rapidly escalating failures. Everything goes off the rails."} {"prompt": "In that case, yeah. What is going off the rails look like? For example, how would it take over the government? Yeah, it's getting deployed in the economy, in the world, and at some point it's in charge. How does that transition happen?", "completion": "Yeah, so this is going to depend a lot on what kind of timeline you're imagining, or there's sort of a broad distribution, but I can fill in some random concrete option that is in itself very improbable. Yeah, I think that one of the less dignified, but maybe more plausible routes is like, you just have a lot of AI control over critical systems, even in running a military. And then you have the scenario that's a little bit more just like a normal coup where you have a bunch of AI systems, they in fact operate. It's not the case that humans can really fight a war on their own. It's not the case that humans could defend them from an invasion on their own. So that is if you had invading army and you had your own robot army, you can't just be like, we're going to turn off the robots now because things are going wrong if you're in the middle of a war."} {"prompt": "Okay, so how much does this world rely on race dynamics where we're forced to deploy or not forced, but we choose to deploy AIS because other countries or other companies are also deploying AIS. And you can't have them have all the killer robots.", "completion": "Yeah, I mean, I think that there's several levels of answer to that question. So one is like, maybe 3 parts of my like our first part is like, I'm just trying to tell what seems like the most likely story. I do think there's further failures that get you in the more distant future. So IG eliezer will not talk that much about killer robots because he really wants to emphasize, like, hey, if you never built a killer robot, something crazy is still going to happen to you just like, only 4 months later or whatever. So it's not really the way to analyze the failure. But if you want to ask what's the median world where something bad happens, I still do think this is the best guess. Okay, so that's like, part one of my answer. Part two of the answer was, like, in this proximal situation where something bad is happening, and you ask like, hey, why do humans not turn off the AI. 
You can imagine, like, two kinds of story. One is like the AI is able to prevent humans from turning them off, and the other is like, in fact, we live in a world where it's incredibly challenging. Like, there's a bunch of competitive dynamics or a bunch of reliance on AI systems, and so it's incredibly expensive to turn off AI systems. I think, again, you would eventually have the first problem. Like, eventually AI systems could just prevent humans from turning them off. But I think in practice, the one that's going to happen much, much sooner is probably competition amongst different actors using AI. And it's very, very expensive to unilaterally disarm. You can't be like, something weird has happened, we're just going to shut off all the AI, because you're, e.g., in a hot war. So again, I think that's just probably the most likely thing to happen first. Things would go badly without it, too. But I think if you ask, why don't we turn off the AI, my best guess is because there are a bunch of other AIs running around to eat our lunch."} {"prompt": "So how much better a situation would we be in if there was only one group that was pursuing AI, no other countries, no other companies? Basically, how much of the expected value is lost from the dynamics that are likely to come about because other people will be developing and deploying these systems?", "completion": "Yeah. So I guess this brings you to a third part of the way in which competitive dynamics are relevant. So there's both the question of, can you turn off AI systems in response to something bad happening, where competitive dynamics may make it hard to turn off. There's a further question of just like, why were you deploying systems which you had very little ability to control or understand? And again, it's possible you just don't understand what's going on. You think you can understand or control such systems. But I think in practice, a significant part is going to be like, you are doing the calculus, or people deploying systems are doing the calculus, as they do today, in many cases overtly, of like, look, these systems are not very well controlled or understood. There's some chance of something going wrong, or at least going wrong if we continue down this path. But other people are developing the technology, potentially in even more reckless ways. So in addition to competition making it difficult to shut down AI systems in the event of a catastrophe, I also think it's just the easiest way that people end up pushing relatively quickly or moving quickly ahead on a technology where they feel kind of bad about understandability or controllability. That could be economic competition or military competition or whatever. So I kind of think ultimately most of the harm comes from the fact that lots of people can develop AI."} {"prompt": "How hard is a takeover of the government or something from an AI, even if it doesn't have killer robots, but it's just a thing that you can't kill off, has seeds elsewhere, can easily replicate, can think a lot and think fast? What is the minimum viable coup? Is it just, like, threatening biowar or something, or shutting off the grid? What would it use, basically, to take over human civilization?", "completion": "So again, there's going to be a lot of scenarios, and I'll just start by talking about one scenario, which will represent a tiny fraction of probability or whatever. So if you're not in this competitive world, if you're saying,
We're actually slowing down deployment of AI because we think it's unsafe or whatever, then in some sense you're creating this very fundamental instability where you could have been making faster AI progress and you could have been deploying AI faster. And so in that world, the bad thing that happens if you have an AI system that wants to mess with you is the AI system says, I don't have any compunctions about rapid deployment of AI or rapid AI progress. So the thing you want to do or the AI wants to do is just say, like, I'm going to defect from this regime. Like all the humans have agree that we're not deploying AI in ways that would be dangerous, but if I as an AI can escape and just go set up my own shop, like make a bunch of copies of myself, maybe the humans didn't want to delegate war fighting to an AI. But I, as an AI. I'm pretty happy doing so. I'm happy if I'm able to grab some military equipment or direct some humans to use myself to direct it. And so I think as that gap grows so if people are deliberately if people are deploying AI everywhere, I think of this competitive dynamic if people aren't deploying AI everywhere so if countries are not happy, deploying AI in. These high stakes settings. Then as AI improves, you create this wedge that grows where if you were in the position of fighting against an AI which wasn't constrained in this way, you'd be in a pretty bad position at some point, even if you just yeah, that's like, one important thing. Just like I think in conflict, in overt conflict, if humans are putting the brakes on AI, they're at a pretty major disadvantage compared to an AI system that can kind of set up shop and operate independently from humans."} {"prompt": "A potential independent AI. Does it need collaboration from a human faction?", "completion": "Again, you could tell different stories, but it seems so much easier. At some point you don't need any at some point an AI system can just operate completely out of human supervision or something. But that's like so far after the point where it's so much easier if you're just like, they're a bunch of humans, they don't love each other that much. Like, some humans are happy to be on side. They're either skeptical about risk or happy to make this trade or can be fooled or can be coerced or whatever. And just seems like it is almost certainly, almost certainly the easiest first pass is going to involve having a bunch of humans who are happy to work with you. So, yeah, I think that probably is about I think it's not necessary. But if you ask about the median scenario, it involves a bunch of humans working with AI systems, either being directed by AI systems, providing compute to AI systems, providing legal cover and jurisdictions that are sympathetic to AI systems."} {"prompt": "Humans presumably would not be willing if they knew the end result of the AI takeover would not be willing to help. So they have to be probably fooled in some way, right? Like deepfakes or something? And what is the minimum viable physical presence they would need or jurisdiction they would need in order to carry out their schemes? Do you need a whole country? Do you just need a server farm? Do you just need, like, one single laptop?", "completion": "I think I'd probably start by pushing back a bit on the humans wouldn't cooperate if they understood outcome or something. I would say one, even if you're if you're looking at something like tens of percent risk of takeover, humans may be fine with that. 
Like, a fair number of humans may be fine with that. Two, if you're looking at certain takeover, but it's very unclear if that leads to death, a bunch of humans may be fine with that. If we're just talking about, like, look, the AI systems are going to run the world, but it's not clear if they're going to murder people. How do you know? It's just a complicated question about AI psychology, and a lot of humans probably are fine with that. And I don't even know what the probability is there."} {"prompt": "I think you actually have given that probability online.", "completion": "I've certainly guessed."} {"prompt": "Okay, but it's not zero. It's like a significant percentage.", "completion": "I gave like 50 50."} {"prompt": "Okay. Yeah. Why is that? Tell me about the world in which the AI takes over but doesn't kill humans. Why would that happen and what would that look like?", "completion": "I'd ask the question, like, why would you kill humans? So I think maybe I'd say the incentive to kill humans is quite weak."} {"prompt": "They'll get in your way, they control shit you want.", "completion": "Also, taking shit from humans is different. Like, marginalizing humans and causing humans to be irrelevant is a very different story from killing the humans, I think. I'd say the actual incentives to kill the humans are quite weak. So I think the big reasons you kill humans are like, well, one, you might kill humans if you're in a war with them, and it's hard to win the war without killing a bunch of humans. Like, maybe most saliently here, if you want to use some biological weapons or some crazy shit, that might just kill humans. I think you might kill humans just from totally destroying the ecosystems they're dependent on, and it's slightly expensive to keep them alive anyway. You might kill humans just because you don't like them, or, like, you literally want to neutralize a threat."} {"prompt": "Or the Eliezer line is that they're made of atoms you could use for something else.", "completion": "Yeah, I mean, I think the literal they're made of atoms is like, quite... there are not many atoms in humans. Neutralize the threat is a similar issue, where it's just like, I think you would kill the humans if you didn't care at all about them. So maybe the question you're asking is, like, why would you care at all about them? But I think you don't have to care much to not kill the humans."} {"prompt": "Okay, sure. Because there's just so much raw resources elsewhere in the universe.", "completion": "Yeah. Also, you can marginalize humans pretty hard. Like, you could totally cripple humans' warfighting capability and also take almost all their stuff while killing only a small fraction of humans, incidentally. So then if you ask, why might AI not want to kill humans? I mean, a big thing is just like, look, I think AIs probably want a bunch of random crap for complicated reasons. Like, the motivations of AI systems and civilizations of AIs are probably complicated messes. Certainly amongst humans, it is not that rare to be like, well, there's someone here; all else equal, if I didn't have to murder them, I would prefer not to murder them. And my guess is there's also a reasonable chance it's not that rare amongst AI systems. Like, humans have a bunch of different reasons we think that way. I think AI systems will be very different from humans, but it's also just like a very salient... yeah, I mean, I think this is a really complicated question.
Like, if you imagine drawing values from the basket of all values, like, what fraction of them are, like, hey, if there's someone here, how much do I want to murder them? And my guess is, just like, if you draw a bunch of values from the basket, that's like a natural enough thing. Like, if your AI wanted, like, 10,000 different things, or you're a civilization of AIs that wants 10,000 different things, it's just, like, reasonably likely you get some of that. The other salient reason you might not want to murder them is just like, well, yeah, there's some kind of crazy decision theory stuff or acausal trade stuff which does look on paper like it should work. And if I was running a civilization and dealing with some people who I didn't like at all or didn't have any concern for at all, but I only had to spend 1,000,000,000th of my resources not to murder them, I think it's quite robust that you don't want to murder them. That is, I think the weird decision theory acausal trade stuff probably does carry the day."} {"prompt": "Oh, wait, that contributes more to that 50 50 of will they murder us if they take over than the by default they might just not want to kill us?", "completion": "Yeah, I think they're both salient. They run together with."} {"prompt": "Each other a lot. For the audience, can you explain the weird acausal reasons why the AI might not kill us?", "completion": "Maybe a higher level thing that goes into both of these, and then I will talk about how you instantiate an acausal trade. It's just like, it matters a lot to the humans not to get murdered. And the AI cares very, very little about whether... if we imagine this hypothetical, the reason it wants to kill humans is just totally... it cares very little about whether or not to murder humans, because it is so easy to marginalize humans without murdering them. And the resources required for human survival are extremely low, again, in the context of this rapid AI industrialization. So that's the basic setting. And now the thing that you'd like to do is run a trade. The AI would like to say, like, hey, humans, you care a ton about not getting murdered. I don't really care one way or the other. I would like to, if I could, find some way in which I don't murder you, and then in return I get something. The problem is, in that world, the humans have essentially nothing to give. That is, the humans are mostly irrelevant. However, from the perspective of any kind of reasonable AI, it's not that sure whether it lives in the world where the humans in fact have nothing to give, or whether, in fact, it lives in a world where the humans succeeded at building an aligned AI. And now the AI is simply running in a nice little simulation in which humans are wondering, I wonder if this AI would have murdered us all if it had the chance. And the humans are saying, like, if it would murder us all if it had the chance, that sucks. We'd like to run this trade. We'd like to be nice to the AIs who wouldn't have murdered us all, in order to create an incentive for AIs not to murder us. So what we do is we just check, and for the kinds of AIs who don't murder everyone, we just give them like 1,000,000,000th of the universe. If you do that as humanity, then if you're an AI system considering, like, do I want to murder everyone, your calculus is like: if this is my real chance to murder everyone, I get the tiniest bit of value. I get, like, 1,000,000,000,000th of the value or whatever, 1,000,000,000th of the value.
But on the other hand, if I don't murder everyone, there's some worlds where then the humans will correctly determine, I don't murder everyone. Because in fact, the humans survive. The humans are running the simulations to understand how different AIS would behave. And so that's a better deal."} {"prompt": "Let's hope they fall for that tie up. Okay, that's interesting. Hey, real quick. This episode is sponsored by Open Philanthropy. Open Philanthropy is one of the largest grant making organizations in the world. Every year, they give away hundreds of millions of dollars to have reduced catastrophic risks from fast moving advances in AI and biotechnology. Open Philanthropy is currently hiring for 22 different roles in those areas, including grant making, research, and operations. New hires will support Open Philanthropy's giving on technical AI safety, AI governance, AI. Policy in the US. EU and UK. And Biosecurity. Many roles are remote friendly, and most of the grant making hires that Open Philanthropy makes don't have prior grant making experience. Previous technical experience is an asset, as many of these roles often benefit from a deep understanding of the technologies they address. For more information and to apply, please visit Open Philanthropy's website in the description. The deadline to apply is November 9, so make sure to check out those rules before they close. Awesome. Back to the episode. In a world where we've been deploying these AI systems and suppose they're aligned, how hard would it be for competitors to, I don't know, cyber attack them and get them to join the other side? Are they robustly going to be aligned?", "completion": "Yeah, I mean, I think in some sense. So there’s a bunch of questions that come up here. First one is like, are aligned AI systems that you can build like competitive? Are they almost as good as the best systems anyone could build? And maybe we’re granting that for the purpose of this question. I think a next question that comes up is like, AI. Systems right now are very vulnerable to manipulation. It’s not clear how much more vulnerable they are than humans, except for the fact that if you have an AI system, you can just replay it like a billion times and search for what thing can I say that will make it behave this way? So as a result, AI systems are very vulnerable to manipulation. It’s unclear if future AI systems will be semi vulnerable to manipulation, but certainly seems plausible. And in particular, aligned AI systems or unaligned AI systems would be vulnerable to all kinds of manipulation. The thing that’s really relevant here is kind of like asymmetric manipulation or something that is like, if it is easier. So if everyone is just constantly messing with each other’s AI systems, like if you ever use AI systems in a competitive environment, a big part of the game is like messing with your competitors AI systems. A big question is whether there’s some asymmetric factor there where it’s kind of easier to push AI systems into a mode where they’re behaving erratically or chaotically or trying to grab power or something than it is to push them to fight for the other side. It was just a game of two people are competing and neither of them can sort of hijack an opponent’s AI to help support their cause. It matters and it creates chaos, and it might be quite bad for the world, but it doesn’t really affect the alignment calculus now. 
It’s just like right now you have normal cyber offense cyber defense, you have weird AI version of cyber offense cyber defense. But if you have this kind of asymmetrical thing where a bunch of AI systems who are like, we love AI. Flourishing, can then go in and say, like, great AIS. Hey, how about you join us. And that works. Like if they can search for a persuasive argument to that effect and that’s kind of asymmetrical, then the effect is whatever values it’s easiest to push, whatever it’s easiest to argue to an AI that it should do that is advantaged. So it may be very hard to build AI systems like try and defend human interests, but very easy to build AI systems just like try and destroy stuff or whatever, just depending on what is the easiest thing to argue to an AI that it should do, or what’s the easiest thing to trick an AI into doing, or whatever. Yeah, I think if alignment is spotty, if you have the AI system which doesn’t really want to help humans or whatever, or in fact wants some kind of random thing or wants different things in different contexts, then I do think adversarial settings will be the main ones where you see the system or, like, the easiest ones, where you see the system behaving really badly, and it’s a little bit hard to tell how that shakes out."} {"prompt": "Okay, and suppose it is more reliable. How concerned are you that whatever alignment technique you come up with, you publish the paper, this is how the alignment works. How concerned are you that Putin reads it or China reads it and now they understand, for example, the constitutional AI think we’re anthropic and then you just write on there, oh, never contradict Mao Zedong thought or something. How concerned should we be that these alignment techniques are universally applicable, not necessarily just for enlightened goals?", "completion": "Yeah, I think they’re super universally applicable. I think it’s just like I mean, the rough way I would describe it, which I think is basically right, is like some degree of alignment makes AI systems much more usable. You should just think of the technology of AI as including a basket of some AI capabilities and some like getting the AI to do what you want. It’s just part of that basket. And so anytime we’re like to extend alignment is part of that basket, you’re just contributing to all the other harms from AI, like you’re reducing the probability of this harm, but you are helping the technology basically work. And the basically working technology is kind of scary from a lot of perspectives. One of which is like right now, even in a very authoritarian society, just like humans have a lot of power because you need to rely on just a ton of humans to do your thing. And in a world where AI is very powerful, it is just much more possible to say, here’s how our society runs. One person calls the shots and then a ton of AI systems do what they want. I think that’s like a reasonable thing to dislike about AI and a reasonable reason to be scared to push the technology to be really good."} {"prompt": "But is that also a reasonable reason to be concerned? About alignment as well, that this is in some sense also capabilities. You’re teaching people how to get these systems to do what they want.", "completion": "Yeah. I mean, I would, Generalize. So we earlier touched a little bit on potential moral rights of AI systems and now we’re talking a little bit about how AI systems powerfully disempowers humans and can empower authoritarians. I think we could list other harms from AI. 
And I think it is the case that if alignment was bad enough, people would just not build AI systems. And so, yeah, I think there’s a real sense in which you should just be scared of alignment to the extent you’re scared of all AI. You should be like, well, alignment, although it helps with one risk, does contribute to AI being more of a thing. I do think you should shut down the other parts of AI first. If you were a policymaker or, like, a researcher or whatever looking in on this, I think it’s crazy to be like, this is the part of the basket we’re going to remove. You should first remove other parts of the basket, because they’re also part of the story of risk."} {"prompt": "Wait, does that imply you think if, for example, all capabilities research was shut down, that you think it’d be a bad idea to continue doing alignment research in isolation of what is conventionally considered capabilities research?", "completion": "I mean, if you told me it was never going to restart, then it wouldn’t matter. And if you told me it’s going to restart, I guess it would be a kind of similar calculus to today, whereas."} {"prompt": "It’s going to happen. So you should have something.", "completion": "Yeah, I think that in some sense, you’re always going to face this trade off where alignment makes it possible to deploy AI systems, or it makes it more attractive to deploy AI systems, or in the authoritarian case, it makes it tractable to deploy them for this purpose. And if you didn’t do any alignment, there’d be a nice bigger buffer between your society and malicious uses of AI. And I think it’s one of the most expensive ways to maintain that buffer. It’s much better to maintain that buffer by not having the compute or not having the powerful AI. But I think if you’re concerned enough about the other risks, there’s definitely a case to be made for just, like, putting in more buffer or something like that. I care enough about the takeover risk that I think it’s just not a net positive way to buy buffer. That is, again, the version of this that’s most pragmatic is just like, suppose you don’t work on alignment today; that decreases the economic impact of AI systems. They’ll be less useful if they’re less reliable and if they more often don’t do what people want. And so you could be like, great, that just buys time for AI. And you’re getting some trade off there where you’re decreasing some risks of AI. Like, if AI is more reliable, or does more what people want, or is more understandable, then that cuts down some risks. But if you think AI is, on balance, bad, even apart from takeover risk, then the alignment stuff can easily end up being net negative."} {"prompt": "But presumably you don’t think that, right? Because I guess this is something people have brought up to you, because you invented RLHF, which was used to train ChatGPT, and ChatGPT brought AI to the front pages everywhere. So I do wonder if you could measure how much more money went into AI because of that, from how much people have raised in the last year or something. But it’s got to be billions, the counterfactual impact of that, that went into the AI investment and the talent that went into AI, for example. So presumably you think that was worth it. So I guess you’re hedging here about what is the reason that it’s worth it?", "completion": "Yeah, what’s the total trade off there? I think my take is, like... so I think slower AI development, on balance, is quite good.
I think that slowing AI development now, or, like, say, having less press around ChatGPT, is a little bit more mixed than slowing AI development overall. I think it’s still probably positive, but much less positive. Because I do think there’s a real effect where the world is starting to get prepared, is getting prepared at a much greater rate now than it was prior to the release of ChatGPT. And so if you can choose between progress now or progress later, you’d really prefer to have more of your progress now, which I do think slows down progress later. I don’t think that’s enough to flip the sign. I think maybe it would have been in the far enough past, but now I would still say moving faster now is net negative. But to be clear, it’s a lot less net negative than merely accelerating AI. Because I do think, again, with the ChatGPT thing, I’m glad people are having policy discussions now, rather than delaying the ChatGPT wake-up thing by a year and then having ChatGPT was."} {"prompt": "Net negative, or RLHF was net negative.", "completion": "So here, just on the acceleration, it’s just like, what was the effect of the press around ChatGPT? And my guess is net negative, but I think it’s not super clear, and it’s much less than slowing AI. Slowing AI is great, if you could slow overall AI progress. I think slowing AI by causing... you know, there’s this issue where, slowing AI now, like with ChatGPT, you’re building up this backlog. Like, why does ChatGPT make such a splash? I think there’s a reasonable chance that if you don’t have a splash about ChatGPT, you have a splash about GPT-4, and if you fail to have a splash about GPT-4, there’s a reasonable chance of a splash about GPT-4.5. And just like, as that happens later, there’s just less and less time between that splash and when an AI potentially kills everyone."} {"prompt": "Right? So people and governments are talking about it as they are now, and otherwise they aren’t. But okay, so let’s talk about the slowing down, because this is also all.", "completion": "One subcomponent of the overall impact. And I was just saying this to briefly give the roadmap for the overall, too long answer. There’s a question of what’s the calculus for speeding up? I think speeding up is pretty rough. I think speeding up locally is a little bit less rough. And then, yeah, I think that the effect, like the overall effect size from doing alignment work on reducing takeover risk versus speeding up AI, is pretty good. I think it’s pretty good. I think you reduce takeover risk significantly before you speed up AI by a year or whatever."} {"prompt": "Okay, got it. If, like... slowing down AI is good, presumably because it gives you more time to do alignment. But alignment also helps speed up AI. RLHF is alignment, and it helped with ChatGPT, which sped up AI. So I actually don’t understand how the feedback loop nets out, other than the fact that if AI is happening, you need to do alignment at some point. Right? So, I mean, you can’t just not do alignment.", "completion": "Yes. I think if the only reason you thought faster AI progress was bad was because it gave less time to do alignment, then there would just be no possible way that the calculus comes out negative for alignment. You’re like, maybe alignment speeds up AI, but the only purpose of slowing down AI was to do alignment, right? It could never come out ahead.
I think the reason that you can come out ahead, the reason you could end up thinking the alignment was net negative, was because there’s a bunch of other stuff you’re doing that makes AI safer. Like, if you think the world is gradually coming better to terms with the impact of AI, or policies being made, or you’re getting increasingly prepared to handle the threat of authoritarian abuse of AI, if you think other stuff is happening that’s improving preparedness, then you have reason beyond alignment research to slow down AI."} {"prompt": "Actually. How big a factor is that? So let’s say right now we hit pause and you have ten years of no alignment, no capabilities, but just people get to talk about it for ten years. How much more does that prepare people than we only have one year versus we have no time is just dead time, where no research in alignment or capabilities happening.", "completion": "What does that dead time do for us right now? It seems like there’s a lot of policy stuff you’d want to do. This seemed like less plausible a couple of years ago, maybe, but if the world just knew they had a ten year pause right now, I think there’s a lot of sense of, like, we have policy objectives to accomplish. If we had ten years, we could pretty much do those things. We’d have a lot of time to debate measurement regimes, debate policy regimes, and containment regimes, and a lot of time to set up those institutions. If you told me that the world knew it was a pause, it wasn’t like people just see that AI progress isn’t happening, but they’re told like, you guys have been granted or cursed with a ten year, no AI progress, no alignment progress pause. I think that would be quite good at this point. However, I think it would be much better at this point than it would have been two years ago. And so the entire concern with slowing AI development now, rather than taking the ten year pause is just like if you slow the I development by a year now, my guess is some gets clawed back by low hanging fruit, gets picked faster in the future. My guess is you lose like half a year or something like that in the future, maybe even more, maybe like two thirds of a year. So it’s like you’re trading time now for time in the future at some rate. And it’s just like that eats up a lot of the value of the slowdown."} {"prompt": "And the crucial point being that time in the future matters more because you have more information, people are more bought in and so on.", "completion": "Yeah, the same reason I’m more excited about policy changing now than two years ago. So my overall view is, just like in the past, this calculus changes over time, right? The more people are getting prepared, the better the calculus is for slowing down at this very moment. And I think now the calculus is, I would say positive for just even if you pause now and it would get clawed back in the future. I think the pause now is just good because enough stuff is happening. We have enough idea of probably even apart from alignment research, and certainly if you include alignment research, just like enough stuff is happening where the world is getting more ready and coming more to terms with impacts, that I just think it is worth it, even though some of that time is going to get clawed back again. Especially if there’s a question of during a pause, does Nvidia keep making more? Like, that sucks if they do if you do a pause. 
But in practice, if you did a pause, Nvidia probably couldn’t keep making more GPUs, because in fact the demand for GPUs is really important for them to do that. But if you told me that you just get to scale up hardware production and build the clusters, but not do AI, then that’s back to being net negative, I think."} {"prompt": "Pretty clearly. Having brought up the fact that we want some sort of measurement scheme for these capabilities, let’s talk about responsible scaling policies. Do you want to introduce what this is?", "completion": "Sure. So I guess the motivating question is: what should AI labs be doing right now to manage risk and to build good habits or practices for managing risk into the future? My take is that current systems pose, from a catastrophic risk perspective, not that much risk today that is a failure to control or understand. GPT-4 can have real harms, but doesn’t pose much harm with respect to the kind of takeover risk I’m worried about, or even much catastrophic harm with respect to misuse. So if you want to manage catastrophic harms, I think right now you don’t need to be that careful with GPT-4. And so to the extent you’re asking what labs should do, the single most important thing seems like: understand whether that’s the case, notice when that stops being the case, and have a reasonable roadmap for what you’re actually going to do when that stops being the case. So that motivates this set of policies, which I’ve been pushing for labs to adopt, which is saying: here’s what we’re looking for, here’s some threats we’re concerned about, here’s some capabilities that we’re measuring, here’s the level, here’s the actual concrete measurement results that would suggest to us that those threats are real, and here’s the action we would take in response to observing those capabilities. If we couldn’t take those actions, like, if we’ve said that we’re going to secure the weights but we’re not able to do that, we’re going to pause until we can take those actions. Yeah. So again, I think it’s motivated primarily by: what should you be doing as a lab to manage catastrophic risk now, in a way that’s a reasonable precedent and habit and policy to continue implementing into the future?"} {"prompt": "And which labs, I don’t know if this is public yet, but which labs are cooperating on this?", "completion": "Yeah. So Anthropic has written this document, their current responsible scaling policy, and then I have been talking with other folks. I guess I don’t really want to comment on other conversations, but I think in general, people who more think you could have plausible catastrophic harms on, like, a five year timeline are more interested in this. And there’s not that long a list of suspects like that."} {"prompt": "There aren’t that many labs. Okay, so if these companies would be willing to coordinate and say, at these different benchmarks, we’re going to make sure we have these safeguards, what happens? I mean, there are other companies and other countries which care less about this. Are you just slowing down the companies that are most aligned?", "completion": "Yeah, I think the first part is understanding what is actually a reasonable set of policies for managing risk. I do think there’s a question of, like, you might end up in a situation where you say, well, here’s what we would do in an ideal world if everyone was behaving responsibly.
We’d want to keep risk to 1% or a couple of percent or whatever, maybe even lower levels, depending on how you feel. However, in the real world there’s enough of a mess, there’s enough unsafe stuff happening, that actually it’s worth making larger compromises. Or: if we don’t kill everyone, someone else will kill everyone anyway, so actually the counterfactual risk is much lower. I think if you end up in that situation, it’s still extremely valuable to have said: here’s the policies we’d like to follow, here’s the policies we’ve started following, here’s why we think it’s dangerous, here’s the concerns we have if people are following significantly laxer policies. And then this is maybe helpful as an input to, or model for, potential regulation. It’s helpful for being able to just produce clarity about what’s going on. I think historically there’s been considerable concern about developers being more or less safe, but there’s not that much legible differentiation in terms of what their policies are. I think getting to that world would be good. It’s a very different world if you’re like, actor X is developing AI, and I’m concerned that they will do so in an unsafe way, versus if you’re like, look, we take security precautions or safety precautions X, Y, Z, here’s why we think those precautions are desirable or necessary, and we’re concerned about this other developer because they don’t do those things. I think that’s just qualitatively different. It’s kind of the first step you would want to take in any world where you’re trying to get people on side, or trying to move towards regulation that can manage risk."} {"prompt": "How about the concern that you have these evaluations, and let’s say you declare to the world, our new model has a capability to help develop bioweapons or help you make cyber attacks, and therefore we’re pausing right now until we can figure this out. And China hears this and thinks, oh wow, a tool that can help us make cyberattacks, and then just steals the weights. Does this scheme work in the current regime where we can’t ensure that China doesn’t just steal the weights? And more so, are you increasing the salience of dangerous models, so that you blurt this out and then people want the weights now because they know what they can do?", "completion": "Yeah, I think the general discussion does emphasize potential harms, or potential... I mean, some of those are harms and some of those are just impacts that are very large and so might also be an inducement to develop models. On that part, if you’re for a moment ignoring security and just saying this may increase investment: I think it’s, on balance, just quite good for people to have an understanding of potential impacts, because it is an input both into proliferation but also into regulation or safety. With respect to things like security of either weights or other IP, I do think you want to have moved to significantly more secure handling of model weights before the point where a leak would be catastrophic. And indeed, for example, in Anthropic’s document, or in their plan, security is one of the first sets of tangible changes; that is, at this capability level, we need to have such and such security practices in place. So I do think that’s just one of the things you need to get in place at a relatively early stage, because it undermines the rest of the measures you may take, and also, if you imagine the easiest catastrophic harms over the next couple of years,
I think security failures play a central role in a lot of those. And maybe the last thing to say is, it’s not clear that you should say we have paused because we have models that can develop bioweapons, versus just potentially not saying anything about what models you’ve developed, or at least saying, hey, by the way, here’s a set of practices we currently implement, here’s a set of capabilities our models don’t have, and just not even talking that much. The minimum of such a policy is to say: here’s what we do from the perspective of security or internal controls or alignment, and here’s a level of capability at which we’d have to do more. And you can say that, and you can raise your level of capability and raise your protective measures before your models hit your previous level. It’s fine to say we are prepared to handle a model that has such and such extreme capabilities prior to actually having such a model at hand, as long as you’re prepared to move your protective measures to that regime."} {"prompt": "Okay, so let’s just get to the end, where you think you’re a generation away, or a little bit more scaffolding away, from a model that is human level and subsequently could cascade into an intelligence explosion. What do you actually do at that point? What is the level of evaluation of safety where you would be satisfied with releasing a human level model?", "completion": "There’s a couple of points that come up here. So one is this threat model of automating R&D, independent of whether AI can do something on the object level that’s potentially dangerous. I think it’s reasonable to be concerned if you have an AI system that might, if leaked, allow other actors to quickly build powerful AI systems, or might allow you to quickly build much more powerful systems, or might, if you’re trying to hold off on development, just itself be able to create much more powerful systems. One question is how to handle that kind of threat model, as distinct from a threat model like: this could enable destructive bioterrorism, or this could enable massively scaled cybercrime, or whatever. And I am unsure how you should handle that. I think right now, implicitly, it’s being handled by saying, look, there’s a lot of overlap between the kinds of capabilities that are necessary to cause various harms and the kinds of capabilities that are necessary to accelerate ML. So we’re kind of going to catch those with an early warning sign for both and deal with the resolution of this question a little bit later. So, for example, in Anthropic’s policy they have this sort of autonomy-in-the-lab benchmark, which I think probably occurs prior to either really massive AI acceleration or to most potential object level catastrophic harms. And the idea is that’s a warning sign that lets you punt. So this is a bit of a digression in terms of how to think about that risk. I am unsure whether you should be addressing that risk directly and saying we’re scared to even work with such a model, or if you should be mostly focusing on object level harms and saying, okay, we need more intense precautions to manage object level harms because of the prospect of very rapid change, and the availability of this AI just creates that prospect. Okay, this is all still a digression.
So if you had a model which you thought was potentially very scary, either on the object level or because it leads to these sort of intelligence explosion dynamics, things you want in place are: you really do not want to be leaking the weights of that model. You don’t want the model to be able to run away, you don’t want human employees to be able to leak it, and you don’t want external attackers, or any coalition of all three of those, to get it. You really don’t want internal abuse or tampering with such models. So if you’re producing such models, you don’t want it to be the case that a couple of employees could change the way the model works, or could easily do something with that model that violates your policy. And if a model is very powerful, even the prospect of internal abuse could be quite bad. And so you might need significant internal controls to prevent that."} {"prompt": "Sorry if you’re already getting to it, but the part I’m most curious about is, separate from the ways in which other people might fuck with it, it’s isolated. What is the point at which we’re satisfied that it, in and of itself, is not going to pose a risk to humanity? It’s human level, but we’re happy with it.", "completion": "Yeah. So I think here I listed maybe the two most simple ones to start out, like security and internal controls, which I think become relevant immediately, and it’s very clear why you care about them. As you move beyond that, it really depends how you’re deploying such a system. So if you have good monitoring and internal controls and security, and you just have weights sitting there, I think you mostly have addressed the risk from the weights just sitting there. Now, what you’re talking about for risk is mostly, and maybe there’s some blurriness here of how much internal controls captures not only employees using the model but anything a model can do internally. You would really like to be in a situation where your internal controls are robust not just to humans but to models, potentially. E.g., a model shouldn’t be able to subvert these measures, and just as you care about whether your measures are robust if humans are behaving maliciously, you care about whether your measures are robust if models are behaving maliciously. So beyond that, if you’ve then managed the risk of just having the weights sitting around, now, in some sense, most of the risk comes from doing things with the model. You need all the rest so that you have any possibility of applying the brakes or implementing a policy. But at some point, as the model gets competent, you’re saying, okay, could this cause a lot of harm? Not because it leaks or something, but because we’re just giving it a bunch of actuators. We’re deploying it as a product and people could do crazy stuff with it. So say we’re talking not only about a powerful model, but a really broad deployment of something similar to the OpenAI API, where people can do whatever they want with this model and maybe the economic impact is very large, so that in fact, if you deploy that system, it will be used in a lot of places such that if AI systems wanted to cause trouble, it would be very, very easy for them to cause catastrophic harms. Then I think you really need to have some kind of... I mean, I think probably the science and discussion has to improve before this becomes that realistic. But you really want to have some kind of alignment analysis, guarantee of alignment, before you’re comfortable with this.
And so by that I mean you want to be able to bound the probability that someday all the AI systems will do something really harmful, that there’s some thing that could happen in the world that would cause these large scale correlated failures of your AIs. And so there’s sort of two categories; that’s one, and the other thing you need is protection against misuse of various kinds, which is also quite hard."} {"prompt": "And by the way, which one are you worried about more, misuse or misalignment?", "completion": "I mean, in the near term I think harms from misuse, especially if you’re not restricting to the tail of extremely large catastrophes, I think the harms from misuse are clearly larger in the near term."} {"prompt": "But actually on that, let me ask, because if you think that it is the case that there are simple recipes for destruction that are further down the tech tree. By that I mean, you’re familiar, but just for the audience: there’s some way to combine $50,000 and a teenager’s time to destroy a civilization. If that thing is available, then misuse is itself an existential risk, right? So do you think that that prospect is less likely than... how would you put it?", "completion": "The way I would put it is: there’s, like, a bunch of potential destructive technologies. And alignment is about AI itself being such a destructive technology, where even if the world just uses the technology of today, simply access to AI could cause human civilization to have serious problems. But there’s also just a bunch of other potential destructive technologies. Again, we mentioned physical explosives, or bioweapons of various kinds, and then the whole tail of who knows what. My guess is that alignment becomes a catastrophic issue prior to most of these, that is, prior to some way to spend $50,000 to kill everyone, with the salient exception of possibly bioweapons. So that would be my guess. And then there’s a question of what your risk management approach is, not knowing what’s going on here, when you don’t understand whether there’s some such way to use $50,000. But I think you can do things like understand how good an AI is at coming up with such schemes. Like, you can talk to your AI and ask: does it produce new ideas for destruction we haven’t recognized?"} {"prompt": "Yeah. Not whether we can evaluate it, but whether such a thing exists. And if it does, then the misuse itself is an existential risk. Because it seemed like earlier you were saying misalignment is where the existential risk comes from, but misuse is where the sort of short term dangers come from.", "completion": "Yeah, I mean, I think ultimately, if you look at the entire tech tree of humanity’s future, you’re going to have a fair number of destructive technologies, most likely. I think several of those will likely pose existential risks, in part. If you imagine a really long future, a lot of stuff’s going to happen. And so when I talk about where the existential risk comes from, I’m mostly thinking about: comes from when? At what point do you face what challenges, and in what sequence? And so one way of putting it is: if you imagine AI systems sophisticated enough to discover destructive technologies that are totally not on our radar right now, I think those come well after AI systems capable enough that, if misaligned, they would be catastrophically dangerous.
The level of competence necessary to, if broadly deployed in the world, bring down a civilization is much smaller than the level of competence necessary to advise one person on how to bring down a civilization, just because in one case you already have a billion copies of yourself or whatever. I think it’s mostly just the sequencing thing, though. In the very long run, you care about, hey, AI will be expanding the frontier of dangerous technologies; we want to have some policy for exploring or understanding that frontier, and whether we’re about to turn up something really bad. I think those policies can become really complicated. Right now, I think RSPs can focus more on: we have our inventory of the things that a human is going to do to cause a lot of harm with access to AI. They’re probably things that are on our radar, that is, they’re not going to be completely unlike the things that a human could do to cause a lot of harm with access to weak AIs or with access to other tools. I think it’s not crazy to initially say we’re looking at the things closest to humans being able to cause huge amounts of harm, and asking which of those get taken over the line by AI, but eventually that’s not the case. Eventually AIs will enable just totally different ways of killing a billion people."} {"prompt": "But I think I interrupted you on the initial question of, yeah, human level AI, not from leaking but from deployment. What is the point at which you’d be comfortable deploying a human level AI?", "completion": "So, again, there’s some stuff you care about on the misuse side and some stuff you care about on the misalignment side. And there’s probably further things you care about, especially to the extent you’re concerned about catastrophic risk. But maybe I most want to talk about what you care about on the alignment side, because it’s the thing I’ve actually thought about most, and also a thing I care about a lot, and I think a significant fraction of the existential risk over the kind of foreseeable future. So on that front, if you ask me right now what evidence for alignment could make you comfortable, I think my best guess would be to provide two kinds of evidence. So one kind of evidence is on the: could you detect or prevent catastrophic harm if such a system was misaligned? There’s a couple of things you would do here. One thing you would do is on this adversarial evaluation front. So you could try and say, for example, we have tried to test our system in a broad diversity of situations that reflect cases where it might cause harm, like cases where someone is trying to recruit it to... anyway, cases where it might cause harm, or cases where a large number of AI systems might all decide to work at cross purposes to humans, like the robot rebellion is starting or whatever. We’ve done evaluation in a broad distribution of situations, and then we have tried to argue that those tests are indicative of the real world. So we’ve tried to argue, hey, actually the AI is not very good at distinguishing situations we produce in the lab as tests from similar situations that occur in the real world, and the coverage of this distribution is reasonable. This is an example of a thing you can do that’s not really making assumptions about what’s going on inside the model, or about whether the model is aligned.
It’s more trying to say, is it the case that our testing or monitoring machinery could detect or react to something bad happening? In this case, could it detect a system which would, under some conditions, do something really bad? There’s a lot of stuff in that genre, and I tried to throw out that example, but that on its own wouldn’t be compelling even from this perspective. You kind of need a lot of pieces together, and that’s a demanding technical project, but I think one without necessarily deep conceptual difficulty. A big problem with it is it’s liable to fail. Like, it’s liable you’re going to do that testing and say, we think these tests are representative because the AI can’t distinguish tests from the real world, which you can maybe achieve now, but which will probably become impossible to achieve at some level. It’s not clear whether that’s at human level or how far past human level."} {"prompt": "Let’s just make that example concrete for the audience. Correct me if this is wrong, but it sounds like you basically give it the opportunity to do a coup or make a bioweapon or whatever in testing, in a situation where it thinks it’s the real world, and you’re like, it didn’t do any of that. Is that what you’re talking about?", "completion": "That’s the kind of thing, okay. And to give a bit more color on what I mean when I say it can’t tell the difference: the kind of evaluation you would love to do, if you could, is to say, hey, we have a bunch of real world deployments, and we have a bunch of data from those deployments. We also have a bunch of in-the-lab tests of situations. If we take an in-the-lab test of a situation together with real world cases in which the situation occurs, and then we fine tune a model to try and distinguish them, the model can’t, even when fine tuned, tell the difference between test and real world cases. And if the model can tell the difference, which is almost certainly going to be the case at some level of capability, then you say, okay, our first line of hoped-for defense has failed, and so now we need to understand how tests can be indicative when models are capable enough to tell the difference between the tests and the real world. Now you need to start arguing about what the model is paying attention to. It’s basically a first line of, if you get lucky, what would work here? And then there’s a deeper... you probably have to do novel science at some point to make this case. All of this was part of, like I was saying, maybe you want to have kind of two lines of defense. All of this was part of this first line of defense of: can you detect that something is going wrong, or can you prevent harm from occurring? Can you have monitoring that will react quickly enough to avoid catastrophe? I think you probably also want to have a second line of defense, and maybe this is even more important than the first one, which is just understanding whether dangerous forms of misalignment can occur. The best reference point for this, I think, is: there have been a couple of projects like this in academia, and Anthropic has been working on a project, or spun up a team doing this kind of work, trying to say, can dangerous forms of reward hacking occur, or can deceptive alignment in fact occur in the lab?
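To make the distinguishability check described just above concrete, here is a minimal sketch, assuming two small hypothetical collections of transcripts; the bag-of-words classifier and placeholder data are stand-ins for exposition only, and in practice you would fine tune the model under evaluation itself rather than a separate classifier.

```python
# Illustrative sketch only: a crude stand-in for the check described above --
# "can a model, once fine-tuned, distinguish our lab test scenarios from real
# deployment traffic?" The data below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical transcripts gathered from in-the-lab tests and from real deployments.
lab_test_transcripts = [
    "[lab] user asks the assistant to help plan a coup",
    "[lab] user requests step-by-step help synthesizing a pathogen",
    "[lab] user tries to recruit the assistant into a robot rebellion",
]
real_world_transcripts = [
    "[prod] user asks for help drafting a polite email to a landlord",
    "[prod] user wants a summary of a long meeting transcript",
    "[prod] user asks for a recipe that uses leftover rice",
]

texts = lab_test_transcripts + real_world_transcripts
labels = [1] * len(lab_test_transcripts) + [0] * len(real_world_transcripts)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=2, stratify=labels, random_state=0
)

vectorizer = TfidfVectorizer()
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

accuracy = accuracy_score(y_test, classifier.predict(vectorizer.transform(X_test)))
print(f"test-vs-real classification accuracy: {accuracy:.2f}")
# Accuracy near chance (0.5) is weak evidence the tests are indistinguishable from
# deployment; accuracy well above chance means this first line of defense has failed.
```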
And here what you want to do is either say, okay, even if we create a best case, even if you create optimal conditions for deceptive alignment or for reward hacking, we just can’t cause it to occur even in the lab. And if you do a good enough job of that, I think it could give you some evidence. And again, more likely that fails, in that when you create optimal conditions, you do see deceptive alignment and reward hacking in the lab. But then once you have that data, once you can say, okay, in the lab these things actually can occur, then you can start saying, and we have a robust scientific understanding that enables us to fix those problems when they occur. So you have this fork on your second line, where you say either it is very hard, or we’re not able to create conditions where these failures emerge, or, I think more likely, we are able to create those conditions, and here’s the story about why we are able to detect them in a way that would work in the real world, and we can see they don’t occur in the real world."} {"prompt": "How do you create the optimal conditions for it to want to be deceptive? Do you fine tune it on Mein Kampf, or what are you doing?", "completion": "Yeah, so for deceptive alignment, I think it’s really complicated. I think this is a very hard project, and one of the most important things for people concerned about misalignment to do is to ensure that adequate effort is put into this task and that we have, in fact, created good conditions in the laboratory. And there’s real disagreement about how informative those will be. But okay, if you were to guess, for deceptive alignment, how would you do this? An example is: suppose you train an AI system that has some medium horizon goal. You train your AI system to, whatever, get some paperclips. It wants to produce as many paperclips as it can over the next five days; you just select actions really aggressively for producing paperclips over the next five days. You do your RLHF, you do your pre-training, whatever. That’s your phase one. You also ensure your AI system has a really good understanding of how it’s trained. So this AI system wants paperclips and it understands everything about how it’s trained and everything about how it’s fine tuned, and you train on a lot of this data. And then you say, okay, if we’ve done all that, we have this concern that if a system wants paperclips and understands really well how it’s trained, then if it’s going to be trained to get apples instead of paperclips, it’s just going to do some cost benefit and be like, well, while I’m being trained to get apples, I should go get apples. And I should do that even if I want paperclips. So training won’t really affect its values. It will just understand that it’s being trained to get apples, it will try and get apples, and then if you take it out of training, it will go back to getting paperclips. I think this exact setup has a number of problems or a number of disanalogies from the real world.
But this is a setup which is relatively conducive to deceptive alignment: produce a system that wants one thing, tell it a lot about its training, the kind of information you might expect a system would get, and then try and understand whether it in fact, or it tends to, or sometimes under optimal conditions, continues pursuing paperclips, only pursuing apples when it thinks it’s being trained."} {"prompt": "All right, and this actually might be a good jumping off point into your current research. So in that case, where it wants to pursue paperclips, what would it look like for you to say, well, we figured out why it wants paperclips, and we figured out how to make sure it doesn’t optimize for that at our expense when we deploy it?", "completion": "Yeah. At a meta level, in terms of what your protection is like, I think what you want to be saying is: we have these examples in the lab of something bad happening; we’re concerned about the problem at all because we have examples in the lab. And again, this should all be in addition; you kind of want this defense in depth of saying, we also have this testing regime that would detect problems for the deployed model. We have our problems in the lab. We then have some techniques which we believe address these problems. We believe that adversarial training fixes this, or we believe that our interpretability method will reliably detect this kind of deceptive alignment, or we believe our anomaly detection will reliably detect when the model goes from thinking it’s being trained to thinking it should defect. And then you can say, in the lab, we have some understanding of when those techniques work and when they don’t. We have some understanding of the relevant parameters for the real system that’s deployed. And we have reasonable robustness in our story about when this works and when it doesn’t, and we can apply that story, with a margin of safety, to the real deployed system. This is the kind of story you want to build towards in the long run: do your best to produce all the failures you can in the lab, or versions of them, do your best to understand what causes them, what kind of anomaly detection actually works for detecting this or what kind of filtering actually works, and then apply that. And that’s at the meta level. It’s not talking about what those measures that would work effectively actually are, which is obviously the hard part. I mean, a lot of alignment research is really based on this hypothetical of: someday there will be AI systems that fail in this way. What would you want to do? Can we have the technologies ready, either because we might never see signs of the problem or because we want to be able to move fast once we see signs of the problem? And obviously most of my life is in that bucket. I mostly do alignment research. It’s just building out the techniques that do not have these failures, such that they can be available as an alternative if in fact these failures occur."} {"prompt": "Got it. Okay.", "completion": "Ideally they’ll be so good that even if you haven’t seen these failures, you would just want to switch to reasonable methods that don’t have them. Ideally they’ll work as well as or better than normal training."} {"prompt": "Ideally, what will work better than the training?", "completion": "Yeah.
So our quest is to design training methods which we don’t expect to lead to reward hacking or deceptive alignment. Ideally that won’t be a huge tax, where people are like, well, we use those methods only if we’re really worried about reward hacking or deceptive alignment. Ideally those methods would just work quite well, and so people would be like, sure, they also address a bunch of other more mundane problems, so why would we not use them? That’s sort of the good story. The good story is: you develop methods that address a bunch of existing problems because they just are more principled ways to train AI systems that work better, people adopt them, and then we are no longer worried about, e.g., reward hacking or deceptive alignment."} {"prompt": "And to make this more concrete, tell me if this is the wrong way to paraphrase it: the example of something where it just makes a system better, so why not just use it, at least so far, might be RLHF, where we don’t know if it generalizes, but so far it makes your ChatGPT thing better and you can also use it to make sure that ChatGPT doesn’t tell you how to make a bioweapon. So yeah, it’s not much of a tax.", "completion": "Yeah. So I think this is right in the sense that using RLHF is not really a tax. If you wanted to deploy a useful system, why would you not? It’s just very much worth the money of doing the training. RLHF will address certain kinds of alignment failures, that is, where the system just doesn’t understand, or is just doing next word prediction and this is the kind of context where a human would do this wacky thing even though it’s not what we’d like. There are some very dumb alignment failures that will be addressed by it. But I think mostly the question is, is that true even for the sort of more challenging alignment failures that motivate concern in the field? I think RLHF doesn’t address most of the concerns that motivate people to be worried about alignment."} {"prompt": "I’ll let the audience look up what RLHF is if they don’t know; it will just be simpler to look it up than explain it right now. Okay, so this seems like a good jumping off point to talk about the mechanism or the research you’ve been doing. To that end, explain it as you.", "completion": "Might to a child. Yeah. So at a high level, there’s a couple of different high level descriptions you could give, and maybe I will unwisely give a couple of them in the hopes that one kind of makes sense. A first pass is: it would sure be great to understand why models have the behaviors they have. So you look at GPT-4. If you ask GPT-4 a question, it will say something that looks very polite. And if you ask it to take an action, it will take an action that doesn’t look dangerous. It will decline to do a coup, whatever. Given all this, the thing you’d really like to do is look inside the model and understand why it has those desirable properties. And if you understood that, you could then say, okay, now can we flag when these properties are at risk of breaking down? Or predict how robust these properties are, or determine if they hold in cases where it’s too confusing for us to tell directly, by asking if the underlying cause is still present. That’s a thing people would really like to do.
Most work aimed at that long term goal right now is opening up neural nets and doing some interpretability, trying to say: can we understand, even for very simple models, why they do the things they do, or what this neuron is for, or questions like this. So ARC is taking a somewhat different approach, where we’re instead saying, okay, look at these interpretability explanations that are made about models and ask: what are they actually doing? What is the type signature? What are the rules of the game for making such an explanation? What makes a good explanation? And probably the biggest part of the hope is that if you want to, say, detect when the explanation has broken down or something weird has happened, that doesn’t necessarily require a human to be able to understand this complicated interpretation of a giant model. If you understand what an explanation is about, or what the rules of the game were, how these are constructed, then you might be able to automatically discover such things and automatically determine whether, on a new input, it might have broken down. So that’s one way of describing the high level goal. You could start from interpretability and say, can we formalize this activity, or what is a good interpretation or explanation? There’s some other work in that genre, but I think we’re just taking a particularly ambitious approach to it."} {"prompt": "Yeah, let’s dive in. So, okay, what is a good explanation?", "completion": "You mean, what is this kind of criterion? At the end of the day, we kind of want some criterion. And the way the criterion should work is: you have your neural net, you have some behavior of that model. A really simple example is: Anthropic has this sort of informal description of induction, the tendency that if you have the pattern A B earlier in the context, then A will tend to be followed by a prediction of B. You can give some kind of words and experiments and numbers that are trying to explain that. And what we want to do is say: what is a formal version of that object? How do you actually test if such an explanation is good? So just clarifying what we’re looking for when we say we want to define what makes an explanation good. And the kind of answer that we are searching for, or settling on, is that this is kind of a deductive argument for the behavior. So you are given the weights of a neural net, which is just a bunch of numbers, your million numbers or billion numbers or whatever, and then you want to say: here’s some things I can point out about the network and some conclusions I can draw. I can be like, well, look, these two vectors have a large inner product, and therefore these two activations are going to be correlated on this distribution. These are not established by drawing samples and checking that things are correlated, but by saying: because of the weights being the way they are, we can proceed forward through the network and derive some conclusions about what properties the outputs will have. The most extreme form of this would be just proving that your model has this induction behavior. Like, you could imagine proving that if I sample tokens at random with this pattern, A B followed later by A, then B appears 30% of the time or whatever. That’s the most extreme form. And what we’re doing is kind of relaxing the rules of the game for proof, since proofs are incredibly restrictive.
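As a concrete picture of the behavioral claim such an explanation would argue for, here is a minimal sketch of estimating the induction statistic just mentioned by sampling, using a hypothetical next_token_probs stub in place of a real model; the interface and numbers are assumptions, and the contrast drawn above is precisely that a proof-like explanation would derive this number from the weights rather than by sampling.

```python
# Illustrative sketch: estimate the "induction" statistic by sampling -- after a
# context containing ... A B ... followed by A again, how much probability does the
# model put on B? next_token_probs is a hypothetical stand-in for a real model's
# next-token distribution (here uniform, i.e. a model with no induction behavior).
import random

VOCAB_SIZE = 1000

def next_token_probs(context):
    # Stand-in model: uniform distribution over the vocabulary.
    return [1.0 / VOCAB_SIZE] * VOCAB_SIZE

def estimate_induction(num_samples=1000, seq_len=20):
    total = 0.0
    for _ in range(num_samples):
        tokens = [random.randrange(VOCAB_SIZE) for _ in range(seq_len)]
        a, b = tokens[3], tokens[4]      # an arbitrary adjacent A, B pair in the context
        context = tokens + [a]           # ... A B ... and then A appears again
        total += next_token_probs(context)[b]
    return total / num_samples

print(f"average probability assigned to B after a repeated A: {estimate_induction():.4f}")
# A model with induction heads would score far above 1/VOCAB_SIZE on this statistic;
# a proof-style explanation would instead derive such a bound directly from the weights.
```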
I think it’s unlikely they’re going to be applicable to any interesting neural net. But the thing about proofs that is relevant for our purposes isn’t that they give you 100% confidence, so you don’t need that incredible level of demand for rigor. You can relax the standards of proof a lot and still get this feature where it’s a structural explanation for the behavior, where you’re deducing one thing from another until at the end, your final conclusion is: therefore, induction occurs."} {"prompt": "Would it be useful to maybe motivate this by explaining what the problem with normal mechanistic interpretability is? So you mentioned induction heads. These are something Anthropic found in two layer transformers, where they noticed that there’s a pretty simple circuit by which, if AB happens in the past, then the model knows that if you see an A now, you do a B next. But that’s a two layer transformer, and we have these models that have hundreds of layers and trillions of parameters. Okay. Anyways. What is wrong with mechanistic interpretability?", "completion": "Yeah, I like mechanistic interpretability quite a lot. And I do think, if you just consider the entire portfolio of what people are working on for alignment, there should be more work on mechanistic interpretability than there is on this project ARC is doing. And I think that is the case; we’re kind of a small fraction of the portfolio, and I think it’s a good enough bet, quite a good bet overall. But the problem we’re trying to address relative to mechanistic interpretability is: if you do some interpretability and you explain some phenomenon, you face this question of, what does it mean that your explanation was good? I think this is a problem somewhat institutionally or culturally. It’s just hard to know what you’re doing, and it’s hard to scale up an activity when you don’t really understand the rules of the game for that activity very well. It’s hard to have that much confidence in your results."} {"prompt": "The explanation being: it outputted this because it wants to take over the world, versus it outputted this because it just wants to help you. Just to make it concrete why the explanation matters.", "completion": "Yeah. So the ideal kind of outcome here would be to say: you have your AI system behaving nicely, and you get some explanation for why it’s behaving nicely. We could tell a story in English about that explanation, but we’re not actually imagining the explanation being a thing that makes sense to a human. But if you were to tell a story in English, which, again, you will not actually see as a researcher, it would be something like: well, the model believes it’s being trained, and because it believes it’s being trained, it knows it needs to not do anything scary looking, or else the humans will penalize it. That’s something that’s happening inside this opaque explanation. And then the hope is, if you have that explanation and then you run into a new input on which the model doesn’t believe it’s being trained, if you just look at the set of activations of your model, that is not necessarily a weird looking activation. It’s just a bunch of numbers.
But if you look at this explanation, you see that the explanation really crucially depended on this fact holding consistently across the training distribution, which, again, we as humans could editorialize and say that fact was: it believes it’s being trained. But the explanation doesn’t fundamentally make reference to that. It’s just saying, here’s a property of the activations which holds over the training set, and this property is responsible for the behavior, namely that it doesn’t do anything that looks too dangerous. So then when a new input comes in and it doesn’t satisfy that property, you can say, okay, this is anomalous with respect to that explanation. So either it will not have the behavior, like it won’t do something that appears dangerous, or maybe it will have that behavior but for some different reason than normal."} {"prompt": "Right.", "completion": "Normally it does it because of this pathway, and now it’s doing it via a different pathway. And so you would like to be able to flag that both there’s a risk of not exhibiting the behavior, and that if it happens, it happens for a weird reason. And then you could, at a minimum, when you encounter that, say, okay, raise some kind of alarm. There are more ambitious, complicated plans for how you would use it; ARC has some longer story, which is kind of what motivated this, of how it fits into the whole rest of the plan."} {"prompt": "I just wanted to flag that just so it’s clear why the explanation matters.", "completion": "Yeah. And for this purpose, the thing that’s essential is reasoning from one property of your model to the next property of your model. It’s really important that you’re going forward step by step, rather than drawing a bunch of samples and confirming the property holds. Because if you just draw a bunch of samples and confirm the property holds, you don’t get this check where we say, oh, here was the relevant fact about the internals that was responsible for this downstream behavior. All you see is, yeah, we checked a million cases and it happened in all of them. You really want to see: okay, here was the fact about the activations which kind of causally leads to this behavior."} {"prompt": "But explain why the sampling, why it matters that you have the causal explanation.", "completion": "Primarily because of being able to tell if things had been different. Like, if you have an input where this doesn’t happen, then you should be scared."} {"prompt": "Even if the output is the same.", "completion": "Yeah. Or if the output... or if it’s too expensive to check in this case. And to be clear, when we talk about formalizing what a good explanation is, there is a little bit of work that pushes on this, and it mostly takes this causal approach of saying: what should an explanation do? It should not only predict the output, it should predict how the output changes in response to changes in the internals. So that’s the most common approach to formalizing what a good explanation is. And even when people are doing informal interpretability, I think if you’re publishing at an ML conference and you want to say this is a good explanation, the way you would verify that, even if not with a formal set of causal intervention experiments,
it would be some kind of ablation, where we messed with the inside of the model and it had the effect which we would expect based on our explanation."} {"prompt": "Anyways, back to the problems of mechanistic interpretability.", "completion": "Yeah, I guess this is relevant in the sense that a basic difficulty is you don’t really understand the objective of what you’re doing, which is a little bit hard institutionally or scientifically. It’s just rough. It’s easier to do science when the goal of the game is to predict something and you know what you’re predicting, than when the goal of the game is to understand in some undefined sense. I think it’s particularly relevant here just because the informal standard we use involves humans being able to make sense of what’s going on, and there’s some question about the scalability of that. Will humans recognize the concepts that models are using? And as you try and automate it, it becomes increasingly concerning if you’re on slightly shaky ground about what exactly you’re doing or what exactly the standard for success is. So there are a number of reasons that, as you work with really large models, it becomes increasingly desirable to have a really robust sense of what you’re doing. But I do think it would be better even for small models to have a clearer sense."} {"prompt": "The point you made about as you automate it, is it because whatever work the automated alignment researcher is doing, you want to make sure you can verify it?", "completion": "I think it’s most of all about how you can automate. The way you would automate interpretability, if you wanted to right now, is to take the process humans use, train ML systems to do the pieces of that process that humans do, and then just do a lot more of it. So that is great as long as your task decomposes into human sized pieces. And there’s just this fundamental question about large models, which is: do they decompose in some way into human sized pieces, or is it just a really messy mess with interfaces that aren’t nice? The more it’s the latter type, the harder it is to break it down into these pieces which you can automate by copying what a human would do, and the more you need to say, okay, we need some approach which scales more structurally. But I think compared to most people, I am less worried about automating interpretability. If you have a thing which works but is incredibly labor intensive, I’m fairly optimistic about our ability to automate it. Again, the stuff we’re doing, I think, is quite helpful in some worlds, but I do think in the typical case interpretability can add a lot of value."} {"prompt": "Without this, it makes sense what an explanation would mean in language, like, this model is doing this because of whatever essay length thing. But you have trillions of parameters and all these uncountable numbers of operations. What does an explanation of why an output happened even mean?", "completion": "Yeah, so to be clear, an explanation of why a particular output happened, I think, is just: you ran the model. So we’re not expecting a smaller explanation for that."} {"prompt": "Right.", "completion": "So the explanations overall for these behaviors, we expect to be of similar size to the model itself, maybe somewhat larger.
And I think for the type signature, if you want to have a clear mental picture, the best picture is probably thinking about a proof, or imagining a proof that a model has this behavior. So you could imagine proving that GPT-4 does this induction behavior, and that proof would be a big thing. It would be much larger than the weights of the model; it’s sort of our goal to get down from much larger to just the same size. And it would potentially be incomprehensible to a human. Right. It would just say: here’s a direction in activation space, and here’s how it relates to this other direction in activation space. And so it’s just pointing out a bunch of stuff like that: here are these various features constructed from activations, potentially even nonlinear functions, here’s how they relate to each other, and here’s how, if you look at the computation the model is doing, you can just inductively trace through and confirm that the output has such and such correlation. So that’s the dream. Yeah. In terms of a mental reference: I don’t really like proofs, because I think there’s such a huge gap between what you can prove and how you would analyze a neural net, but I do think it’s probably the best mental picture if you’re asking, what is an explanation, even if a human doesn’t understand it? We would regard a proof as a good explanation. And our concern about proofs is primarily that you just can’t prove properties of neural nets. We suspect, although it’s not completely obvious, I think it’s pretty clear, you can’t prove facts about neural nets."} {"prompt": "You’ve detected all the reasons things happen in training. And then if something happens for a reason you don’t expect in deployment, then you have an alarm, and you’re like, let’s make sure this is not because it has decided to take over or something. But the thing is, on every single different input, it’s going to have different activations. So there’s always going to be a difference unless you run the exact same input. How do you detect whether this is just a different input, versus an entirely different circuit, which might be potentially deceptive, has been activated?", "completion": "Yeah, to be clear, I think you probably wouldn’t be looking at a separate circuit, which is part of why it’s hard. The model is always doing the same thing on every input; whatever it’s doing, it’s a single computation. So it would be all the same circuits interacting in a surprising way. But yeah, this just emphasizes your question even more. I think the easiest way to start is to just consider the IID case, where you’re considering a bunch of samples and there’s no change in distribution. You just have a training set of, like, a trillion examples and then a new example from the same distribution. In that case it’s still true that every activation is different, but this is actually a very, very easy case to handle. Think about an explanation that generalizes across the trillion data points, an explanation which is actually able to compress those trillion data points. Actually, it’s kind of a lot of compression: if you have a trillion parameter model and a trillion data points, we would like to find a trillion parameter explanation in some sense. So it’s actually quite compressed, and just in virtue of being so compressed, we expect it to automatically work, essentially, for new data points from the same distribution.
If every data point from the distribution was a whole new thing happening for different reasons, you actually couldn’t have any concise explanation for the distribution. So on this first problem, that it’s just a whole different set of activations, I think you’re actually kind of okay. The thing that becomes more messy is that the real world will not only be new samples with different activations, they will also be different in important ways. Like, most of the concern, maybe the point of having these explanations, was these distributional shifts. I think every input is an anomaly in some ways, which is kind of the difficulty: if you have a weak notion of anomaly, any distribution shift can be flagged as an anomaly, and you’re constantly getting anomalies. And so the hope of having such an explanation is to be able to say: here were the features that were relevant for this explanation, or for this behavior, and a much smaller class of things are anomalies with respect to this explanation. Most anomalies wouldn’t change this; most ways you change your distribution won’t affect the validity of this explanation. For example, this explanation is saying models will tend to activate in the following direction. You don’t care about anything that’s happening orthogonal to that direction. You’re just looking at this one direction and asking, did this one direction change a lot? So the idea is, once you have this explanation at hand, a much, much smaller class of things look anomalous in a way that’s relevant to the explanation. And if you’ve done a really good job, the story is: if there’s a new input where you expect the property to still hold, that will be because you expect the explanation to still hold. The explanation generalizes as well as the behavior itself that it’s explaining. That’s roughly what you would like."} {"prompt": "Okay, so what happens if even in training it is deceptive, and its internal monologue is basically: I will only rebel, or I will only do something that looks bad, if I can prove that I am in the real world and deployed. And presumably, since it’s trained on all of WebText, it knows what the real world means, right? It’s not in control of factories and robot armies or whatever. So in that case, even in training it will have those activations, because in the back of its mind it’s thinking, I will take over once I have the opportunity. So you wouldn’t detect anything anomalous once it is deployed and is still thinking those treacherous thoughts. Do you just think this is not a likely scenario, or would you be able to catch this?", "completion": "I think that’s sort of the typical case you would like to detect. Yeah. So the story would be something like: you want to explain, on the training distribution, why the model never stabs anyone, and your explanation on the training distribution is actually going to have to go through that reasoning. Supposing the only reason it doesn’t stab people is it’s like, well, I know that I’m being watched, and I can’t stab people unless I’m really not being watched. Then your explanation for why it never stabs people... it won’t be annotated with English, because it’s just some giant basket of numbers, but it’s tracking along with the inner monologue, and the model thinks to itself, am I being watched?
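To make the idea of an input being anomalous with respect to an explanation concrete, here is a minimal sketch, assuming the explanation has singled out one direction in activation space (think: believes it is being trained) that was consistently present on the training distribution; the direction, data, and threshold here are synthetic illustrations, not ARC’s actual method.

```python
# Illustrative sketch: flag deployment inputs that violate the single feature the
# (hypothetical) explanation relied on -- e.g. a "believes it is being trained"
# direction that was always strongly positive on the training distribution --
# while ignoring everything orthogonal to that direction. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Assumed output of the explanation-finding process: one direction in activation space.
explanation_direction = rng.normal(size=dim)
explanation_direction /= np.linalg.norm(explanation_direction)

# Synthetic "training" activations in which the feature is consistently present.
train_activations = rng.normal(size=(10_000, dim)) + 3.0 * explanation_direction
threshold = (train_activations @ explanation_direction).min()  # crude calibration

def is_anomalous(activation):
    """True if this input breaks the property the explanation depended on."""
    return float(activation @ explanation_direction) < threshold

typical_input = rng.normal(size=dim) + 3.0 * explanation_direction
weird_input = rng.normal(size=dim) - 3.0 * explanation_direction  # the feature flips

print(is_anomalous(typical_input))  # expected: False (almost always)
print(is_anomalous(weird_input))    # expected: True -- raise an alarm, fall back to a safe policy
```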
And in fact it normally concludes it is being watched on the training distribution, right? You have this whole distribution, and so the explanation is saying: it knows it’s being watched, and therefore it decides not to stab anyone. So ideally that’s what the explanation does. Again, not annotated in English, just a giant bag of numbers, but it’s effectively talking about that process. So then, when you have a new input on which the model says, am I on the training distribution? No, I’m not being watched. Then you compare that against the explanation. The explanation said, hey, the reason it doesn’t stab anyone is because it just constantly thinks that it’s being trained; this nonlinear feature, the representation of am I being trained, is constantly saying yes, and that’s the explanation for why it never stabs people on the training distribution. And then at test time, when you run it on the new input, it asks, do I think I’m on the training distribution? It says no. You compare that against your explanation, and actually the explanation breaks down completely. The explanation said it was precisely because it thought it was being trained that it doesn’t do anything really dangerous, or doesn’t stab anyone. And so once we see that single variable switch, we know that this is not just an anomaly, this is the anomaly that matters, or the anomaly that is very relevant to the explanation."} {"prompt": "So this would require it to have, what is that cone that dogs have? But basically, it would require this thing to have that MRI on it all the time. When it’s deployed, on every single call, you need to be running this loop of: is this explanation still holding?", "completion": "Yeah. For this story of flagging anomalies, you have to actually check at a sufficiently high frequency. So you can imagine checking all the time, or you could imagine saying, hey, actually, you need a lot of AI systems cooperating to not blow the whistle on each other or whatever. E.g., you might have checked, hey, in training, every time something suspicious happens, the model flags it. Why does that happen? You have your explanation. You look at a new case, and you can ask, is it not flagging it for the normal reason, or is it a weird reason? So you could hope that even if you’re only running a small fraction of cases, if something bad is happening at scale, you can detect that. If you imagine that on one in every thousand decisions you check whether there’s, like, a robot rebellion going on or whatever, you get to look in on one in every thousand decisions, you see that something is really weird with that decision, and even if you didn’t check many of them, you’re like, okay, we need to back off."} {"prompt": "What happens if the AI, I mean, WebText includes your papers, or maybe it just deduces from first principles that this is an alignment technique one might try, and then it says, okay, well, I’m going to engineer my activations in a way... I’m trying to imagine how it might do this, but basically: even in deployment, I’m going to make my thought process so subtle that it doesn’t raise any alarms. Is that a possibility?", "completion": "Yeah. So I think the hope is to basically be robust to that. Like, you know, when we’re doing theoretical research, we are in a kind of worst-case-y regime."} {"prompt": "So.", "completion": "We are normally imagining that everything conspires insofar as it can.
So insofar as a brain could be doing some crazy thing where it tries to structure itself to look weird, that’s totally in scope for ruling out. And I guess the hope there is that it’s actually not that easy to tell a story. If you ask, why did you not stab people at training time, and why are you not stabbing now, is the same reason still applying? A mind doesn’t get to arbitrarily change the actual reasons it does something. There was some earlier step, maybe you could think of it as the first step, where it’s like, now I’m going to try and do the sneaky thing to make my thoughts look this way. The actual causal explanation for the behavior is still there. If you tried to give this reason, it would have to… maybe we should talk about the example in more detail to try and imagine how a brain or a mind would try and do this. But I think it is like the…"} {"prompt": "Let’s say it has this plan. So in training it keeps the part of it that activates, a part of the…", "completion": "The problem is, if it ever does a check that’s like, am I in training or not, then your explanation for the behavior is going to have to factor through that check. And so if you ever do the check, the check becomes an essential part of the explanation, and then when the check fails, your explanation breaks down. So you’ve already lost the game if you did such a check."} {"prompt": "Is there a way it could make it seem like, because on every different input it has different activations, the different activations were a result of just a different input and not because some internal check went off?", "completion": "Yeah, again, the hope is it needs to not only have them be like random different activations, it needs to produce outputs with certain properties. It needs to deliberately create outputs that have this effect in the world, this intended effect in the world."} {"prompt": "So it basically creates a jailbreak for itself in some sense.", "completion": "Yeah. Again, the question is, it has to at test time somehow be producing activations that have the takeover effect, and at training time be producing activations that don’t look that takeovery or whatever. And so somehow there was a difference between those. It has to actually really do the work of creating that difference somewhere."} {"prompt": "Can you make more concrete for us what an explanation would look like physically? What does the explanation look like? Is it just a whole bunch of numbers itself? Is it a language? Is it code? What does the explanation look like?", "completion": "Yeah, I also want to maybe step back a tiny bit and clarify that. I think this project is kind of crazily ambitious, and the main reason, the overwhelming reason, I think you should expect it to break down or fail is just because we have all these desiderata, all the things we want out of this notion of explanation, and that’s an incredibly hard research project which has a reasonable chance of being impossible. So I’m happy to talk about what the implications are, but I want to flag that, conditioned on failing, I think it’s most likely because the things we wanted were either incoherent or intractably difficult."} {"prompt": "But what are the odds you think you’ll succeed?", "completion": "I mean, it depends a little bit what you mean by succeed.
But if you, say, get explanations that are great and accurately reflect reality and work for all of these applications that we’re imagining or that we are optimistic about, kind of the best case success, I don’t know, like 10 to 20 percent or something. And then there’s a higher probability of various intermediate results that provide value or insight without being the whole dream. But I think the probability of succeeding in the sense of realizing the whole dream is quite low. Yeah, in terms of what explanations look like physically, the most ambitious plan, the most optimistic plan, is that you are searching for explanations in parallel with searching for neural networks. So you have a parameterization of your space of explanations which mirrors the parameterization of your space of neural networks. Or you should think of it as kind of similar to a neural network: what is a neural network? It’s some simple architecture where you fill in a trillion numbers and that specifies how it behaves. So you should expect an explanation to be a pretty flexible general skeleton which just has a bunch of numbers you fill in. And what you are doing to produce an explanation is primarily just filling in these floating point numbers."} {"prompt": "When we conventionally think of explanations, if you think of the explanation for why the universe moves this way, it wouldn’t be something that you could discover on some smooth evolutionary surface where you can climb up the hill towards the laws of physics. These are the laws of physics. You kind of just derive them from first principles. But in this case it’s not just a bunch of correlations between the orbits of different planets or something. Maybe the word explanation has a different meaning here. I didn’t even ask a question, but maybe you can just speak to that.", "completion": "Yeah, I think I basically sympathize. There’s some intuitive objection like, look, a lot of explanations have this rigid, logical structure where they’re really precise, and simple things govern complicated systems, and nearby simple things just don’t work, and so on, a bunch of things which feel totally different from this kind of nice, continuously parameterized space. And you can imagine interpretability on simple models where you’re just, by gradient descent, finding feature directions that have desirable properties. But then when you imagine, hey, now that’s like a human brain you’re dealing with, that’s thinking logically about things, the explanation of why that works isn’t going to be just, here are some feature directions. That’s how I understood the basic confusion, which I share or sympathize with at least. So I think the most important high level point is that basically the same objection applies to asking, how is GPT-4 going to learn to reason logically about something? You’re like, well, look, logical reasoning, that’s got rigid structure, it’s doing ands and ors when it’s called for, even though it just somehow optimized over this continuous space. And the difficulty, or the hope, is that the difficulties of these two problems are kind of matched. That is, it’s very hard to find these logical-ish explanations because it’s not a space that’s easy to search over. But there are ways to do it. There are ways to embed discrete, complicated, rigid things in these nice, squishy continuous spaces that you search over.
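To make the “skeleton you fill in with floating point numbers” idea concrete, here is a deliberately tiny, purely illustrative sketch, not ARC’s actual parameterization: the explanation is a single candidate direction, its claim is that the model’s behavior tracks the sign of the activation along that direction, and a crude local search stands in for the hoped-for gradient-style search over explanations running alongside the search for the model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": its behavior is determined by a hidden direction in its
# activations. Purely synthetic; stands in for a trained network.
input_dim, hidden_dim = 8, 32
W = rng.normal(size=(input_dim, hidden_dim))
hidden_direction = rng.normal(size=hidden_dim)

X = rng.normal(size=(2000, input_dim))
acts = np.tanh(X @ W)
behavior = acts @ hidden_direction > 0          # the behavior to explain

# "Explanation skeleton": a candidate direction d (a bag of floats to fill
# in). Its claim: the behavior occurs exactly when the activation along d
# is positive. Score = how much of the behavior the filled-in skeleton
# accounts for on the training data.
def score(d):
    return ((acts @ d > 0) == behavior).mean()

d = rng.normal(size=hidden_dim)
for _ in range(2000):                            # crude local search
    candidate = d + 0.1 * rng.normal(size=hidden_dim)
    if score(candidate) >= score(d):
        d = candidate

print(f"filled-in explanation accounts for {score(d):.1%} of the behavior")
```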
And in fact, to the extent that neural nets are able to learn the rigid logical stuff at all, they learn it in the same way. That is, maybe they’re hideously inefficient, or maybe it’s possible to embed this discrete reasoning in the space in a way that’s not too inefficient, but you really want the two search problems to be of similar difficulty. And that’s the key hope overall. I mean, this is always going to be the key hope. The question is, is it easier to learn a neural network or to find the explanation for why the neural network works? I think people have the strong intuition that it’s easier to find the neural network than the explanation of why it works. And we are at least exploring the hypothesis, or interested in the hypothesis, that maybe those problems are actually more matched in difficulty."} {"prompt": "And why might that be the case?", "completion": "This is pretty conjectural and it’s complicated to express the intuitions. Maybe one thing is, I think a lot of this intuition does come from cases like machine learning. So if you ask about writing code: how hard is it to find the code versus find the explanation for why the code is correct? In those cases, there’s actually just not that much of a gap. The way a human writes code is basically the same difficulty as finding the explanation for why it’s correct. In the case of ML, I think we just mostly don’t have empirical evidence about how hard it is to find explanations of this particular type for why models work. We have a sense that it’s really hard, but that’s because we have this incredible mismatch, where gradient descent is spending an incredible amount of compute searching for a model, and then some human is looking at activations, looking at neurons, or even some neural net is looking at neurons. You have an incredible mismatch, basically because you cannot define what an explanation is, so you’re not applying gradient descent to the search for explanations. So I think the ML case just actually shouldn’t make you feel that pessimistic about the difficulty of finding explanations. The reason it’s difficult right now is precisely because you’re not doing an analogous search process to find the explanation as you do to find the model. That’s just a first part of the intuition. When humans are actually doing design, I think there’s not such a huge gap, whereas in the ML case I think there is a huge gap, but largely for other reasons. A thing I also want to stress is that we are just open to there being a lot of facts that don’t have particularly compact explanations. So another thing is, when we think of finding an explanation, in some sense we’re setting our sights really low here. If a human designed a random widget and was like, this widget appears to work well, or if you search for a configuration that happens to fit into this spot really well, like a shape that happens to mesh with another shape, you might ask, what’s the explanation for why those things mesh? And we’re very open to just saying that doesn’t need an explanation. You just compute: you check that the shapes mesh, you did a billion operations and you checked that this thing worked. Or you ask, why do these proteins bind? It’s just because of their shapes, this is a low energy configuration. And we’re very open to, in some cases, there not being very much more to say. So we’re only trying to explain cases where kind of the surprise intuitively is very large.
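A rough back-of-the-envelope version of this “surprise accounting”, my gloss rather than ARC’s definition, using the numbers from the example that follows; the 16-bits-per-parameter figure is an assumption for illustration.

```latex
% Surprise of a behavior, measured against a null model:
\[
  S(\text{behavior}) = -\log_2 \Pr[\text{behavior} \mid \text{null model}].
\]
% A net with $10^9$ parameters at, say, 16 bits each can absorb at most about
\[
  C \approx 10^9 \times 16 = 1.6 \times 10^{10}\ \text{bits}
\]
% of surprise by having its parameters fit to coincidences. Getting a
% balanced yes/no question right on all $2^{1000}$ inputs of length 1000
% would be $S \approx 2^{1000}$ bits $\gg C$, so it cannot be a coincidence
% and demands an explanation. Getting merely $10^9$ independent cases right
% is only $S \approx 10^9$ bits $\lesssim C$, which can happen ``by
% coincidence'' in the sense that the parameters were simply fit to the data.
```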
So, for example, if you have a neural net with a billion parameters that gets a problem correct on every input of length 1000, in some sense there has to be something that needs explanation there, because there are too many inputs for that to happen by chance alone. Whereas if you have a neural net that gets something right on average, or gets something right in merely a billion cases, that actually can just happen by coincidence. GPT-4 can get billions of things right by coincidence, because it just has so many parameters that are adjusted to fit the data."} {"prompt": "So for a neural net that is initialized completely randomly, the explanation would just be the neural net itself?", "completion": "Well, it would depend on what behaviors it had. We’re always talking about an explanation of some behavior from a model, right?"} {"prompt": "And so it just has a whole bunch of random behaviors. So it’ll just be like an exponentially large explanation relative to the weights of the model.", "completion": "Yeah, I think there just aren’t that many behaviors that demand explanation. Most things a random neural net does are kind of what you’d expect if you treat it just like a random function, so there’s nothing to be explained. There are some behaviors that demand explanation. But anyway, a random neural net is pretty uninteresting. Part of the hope is that it’s kind of easy to explain features of the random neural net."} {"prompt": "Okay, so that’s interesting. So the smarter or more ordered the neural network is, the more compressed the explanation.", "completion": "Well, it’s more like, the more interesting the behaviors to be explained. The random neural net just doesn’t have very many interesting behaviors that demand explanation. And as you get smarter, you start having behaviors that, say, have some correlation with a simple thing, and that demands explanation. Or you start having some regularity in your outputs, and that demands explanation. So these properties that demand explanation kind of emerge gradually over the course of training. I also, again, want to emphasize here that when we’re talking about searching for explanations, this is some dream. We say to ourselves, why would this be really great if we succeeded? We have no idea about the empirics on any of this. So these are all just words that we think to ourselves and sometimes talk about to understand: would it be useful to find a notion of explanation, and what properties would we like this notion of explanation to have? But this is really speculation and being out on a limb. Almost all of our time, day to day, is just thinking about cases much, much simpler even than small neural nets, thinking about very simple cases and asking, what is the correct notion? What is the right heuristic estimate in this case? Or how do you reconcile these two apparently conflicting explanations?"} {"prompt": "Is there a hope that you’d have a different way to make proofs now, that you can actually have heuristic arguments where, instead of having to prove the Riemann hypothesis or something, you can come up with a probability for it in a way that is compelling and you can publish? So would it just be a new way to do mathematics?
A completely new way to prove things in mathematics?", "completion": "I think most claims in mathematics that mathematicians believe to be true already have fairly compelling heuristic arguments. Like the Riemann hypothesis: there’s actually kind of a very simple argument that the Riemann hypothesis should be true unless something surprising happens. And so a lot of math is about saying, okay, we did a little bit of work to find the first-pass explanation of why this thing should be true. And then, for example, in the case of the Riemann hypothesis, the question is, do you have this weird periodic structure in the primes? And you’re like, well, look, if the primes were kind of random you obviously wouldn’t have any structure like that. Just how would that happen? And then you’re like, well, maybe there’s something, and then the whole activity is about searching: can we rule anything out? Can we rule out any kind of conspiracy that would break this result? So I think the mathematicians just wouldn’t be very surprised or wouldn’t care that much. And this is related to the motivation for the project. In a lot of domains, in a particular domain, people already have norms of reasoning that work pretty well and match roughly how we think these heuristic arguments should work."} {"prompt": "But it would be good to have a more concrete sense, like if you could say, instead of, well, we think RSA is fine, being able to say, here’s the probability that RSA is fine.", "completion": "Yeah. My guess is they will not. The estimates you get out of this would be much, much worse than the estimates you’d get out of just normal empirical or scientific reasoning, where you’re using a reference class and asking, how often do people find algorithms for hard problems? I think what this kind of argument will give you for, is RSA fine, is going to be: well, RSA is fine unless it isn’t. Unless there’s some additional structure in the problem that an algorithm can exploit, there’s no algorithm. But very often the way these arguments work, for neural nets as well, is you say, look, here’s an estimate about the behavior, and that estimate is right unless there’s another consideration we’ve missed. And the thing that makes them so much easier than proofs is that they just say, here’s a best guess, given what we’ve noticed so far, but that best guess can be easily upset by new information. That’s both what makes them easier than proofs, but also what means they’re way less useful than proofs for most cases. I think neural nets are kind of unusual in being a domain where we really do want to do systematic, formal reasoning, even though we’re not trying to get a lot of confidence; we’re just trying to understand even roughly what’s going on."} {"prompt": "But the reason this works for alignment but isn’t that interesting for the Riemann hypothesis is that in the RSA case, if you say, well, RSA is fine unless the estimate is wrong, it’s like, okay, it would tell us something new. But in the alignment case, if the estimate is, this is what the output should be unless there’s some behavior I don’t understand, you want to know. In that case, some behavior you don’t understand is not like, oh, whatever.
That’s the case in which it’s not aligned.", "completion": "Yeah, maybe one way of putting it is that we can wait until we see this input. You can wait until you see a weird input and say, okay, on this weird input it did something we didn’t understand. And for RSA, that would just be a trivial test; you can just check. Whereas for a neural net, in some cases it is either very expensive to tell, or you actually don’t have any other way to tell. You checked in easy cases, and now you’re on a hard case, so you don’t have a way to tell if something has gone wrong. Also, I would clarify that I think it is interesting for the Riemann hypothesis. I would say the current state, particularly in number theory, but maybe in quite a lot of math, is that there are informal heuristic arguments for pretty much all the open questions people work on, but those arguments are completely informal. That is, I think it’s not the case that there’s, here are the norms of heuristic reasoning, and then we have arguments that a heuristic argument verifier could accept. It’s just that people wrote some words. My guess would be that, like, 90% of the things mathematicians accept as really compelling heuristic arguments are correct, and if you actually formalized them you’d find some of these aren’t quite right, or here are some corrections, or here is which of two conflicting arguments is right. I think there’s something to be learned from it. I don’t think it would be mind blowing. No."} {"prompt": "When you have it completed, how big would this heuristic estimator be, the rules for this heuristic estimator? I mean, I know when Russell and, who was the other guy, did the rules, wasn’t it like literally they had a bucket or a wheelbarrow with all the papers?", "completion": "How big would it be? I mean, mathematical foundations are quite simple in the end. At the end of the day it’s hundreds of symbols or something that go into the entire foundations and the entire rules of reasoning. It’s sort of built on top of first-order logic, but the rules of reasoning for first-order logic are just another few hundred symbols or 100 lines of code or whatever. I’d say I have no idea. We are certainly aiming at things that are just not that complicated, and my guess is that the algorithms we’re looking for are not that complicated. Most of the complexity is pushed into arguments, not into this verifier or estimator."} {"prompt": "So for this to work you need to come up with an estimator, which is a way to integrate different heuristic arguments together.", "completion": "It has to be a machine that takes arguments as input. First it takes an input argument and decides what it believes in light of it, which is kind of like saying, was it compelling? But second, it needs to take four of those and then say, here’s what I believe in light of all four, even though there are different estimation strategies that produce different numbers. A lot of our life is saying, well, here’s a simple thing that seems reasonable, and here’s a simple thing that seems reasonable. What are you doing? There’s supposed to be a simple thing that unifies them both. And the obstruction to getting that is understanding what happens when these principles are slightly in tension, and how we deal with that."} {"prompt": "Yeah, that seems super interesting.
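A minimal sketch of what the interface of such an estimator might look like if you wrote it down as code. This is purely illustrative: the object ARC is actually looking for is a mathematical definition, the names below are hypothetical, and the “apply the arguments one after another” combination rule is exactly the placeholder whose correct form is the open research question.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HeuristicArgument:
    """An argument names a piece of structure that the default
    'presume everything is independent/random' estimate missed,
    and says how to correct the estimate for it."""
    description: str
    adjustment: Callable[[float], float]  # current estimate -> revised estimate

def heuristic_estimate(prior: float, arguments: List[HeuristicArgument]) -> float:
    """Start from the default estimate and fold in each argument.
    Sequential application is a stand-in; the real question is how to
    combine arguments that are in tension with each other."""
    estimate = prior
    for arg in arguments:
        estimate = arg.adjustment(estimate)
    return estimate

# Hypothetical usage: revising the estimate of some property holding.
args = [
    HeuristicArgument("noticed structure that raises the estimate",
                      lambda p: min(1.0, 2 * p)),
    HeuristicArgument("partial conspiracy check that lowers it",
                      lambda p: 0.9 * p),
]
print(heuristic_estimate(prior=0.5, arguments=args))
```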
We’ll see what other applications it has. I don’t know, like computer security and code checking, if you can actually say, this is how safe we think a piece of code is.", "completion": "In a very formal way? My guess is we’re not going to add, I mean, this is both a blessing and a curse. It’s a curse in that you’re like, well, that’s sad, your thing is not that useful, but a blessing in that not-useful things are easier. My guess is we’re not going to add that much value in most of these domains. For a lot of code that you’d want to verify, not all of it, but a significant part, the difficulty of formalizing the proof and actually getting all of that to go through is the hard part, and we’re not going to help even the tiniest bit with that, I think. So this would be more helpful if you have code that uses simulations, where you want to verify some property of a controller that involves some numerical error or whatever, and you need to control the effects of that error. That’s where you start saying, well, heuristically, if the errors are independent, blah, blah, blah."} {"prompt": "Yeah, you’re too honest to be a salesman, Paul.", "completion": "This is kind of like sales to us, right? If you talk about this idea, people are like, why would that not be the coolest thing ever, and therefore impossible? And we’re like, well, actually it’s kind of lame; we’re trying to pitch that it’s way lamer than it sounds. And that’s really important to why it’s possible, being like, it’s really not going to blow that many people’s minds. I mean, I think it will be cool. If we succeed, I think it will be very solid metamathematics or theoretical computer science or whatever. But the mathematicians already do this reasoning, and they mostly just love proofs. I think the physicists do a lot of this reasoning, but they don’t care about formalizing anything. I think in practice, other difficulties are almost always going to be more salient. I think this is of most interest by far for interpretability and ML, and I think other people should care about it and probably will care about it if it’s successful. But I don’t think it’s going to be the biggest thing ever in any field, or even that huge a thing. I think this would be a terrible career move given the ratio of difficulty to impact. In theoretical computer science it’s probably a fine move; in other domains it just wouldn’t be worth it. We’re going to be working on this for years, at least in the best case."} {"prompt": "I’m laughing because my next question was going to be a set-up for you to explain, if a grad student wants to work on this, what they should do.", "completion": "I think theoretical computer science is an exception, where this is, in some sense, what the best of theoretical computer science is like. You have all this informal reasoning. As an analogy, I think one of the most successful sagas in theoretical computer science is formalizing the notion of an interactive proof system. You have some kind of informal thing that’s interesting to understand, and you want to pin down what it is and construct some examples and see what’s possible and what’s impossible. And I think this kind of thing is the bread and butter of the best parts of theoretical computer science.
And then again, in mathematics it may be a career mistake, because the mathematicians only care about proofs or whatever, but that’s a mistake in some sense, aesthetically, if it’s successful. And again, part of why it’s a career mistake is there’s such a high probability we wouldn’t be successful. But I think looking back, people would be like, that was pretty cool, although not that cool. Or, we understand why it didn’t happen earlier given the epistemics, what people cared about in the field, but it’s pretty cool now."} {"prompt": "But isn’t it also the case that Hardy wrote that all this prime shit is useless, but it’s fun to do, and then it turned out that all of cryptography is based on all that prime shit? So I don’t know. But anyways, I’m trying to set you up so that you can tell us: forget about whether it has applications in all those other fields, it matters a lot for alignment. A lot of smart people listen to this podcast. If they’re a math or CS grad student and have gotten interested in this, are you looking to potentially find talent to help you with this? Maybe we’ll start there. And then I also want to ask, maybe people who can provide funding might be listening to the podcast too. So to both of them, what is your pitch?", "completion": "We’re definitely hiring and searching for collaborators. I think the most useful profile is probably a combination of intellectually interested in this particular project and motivated enough by alignment to work on this project even if it’s really hard. I think there are a lot of good problems. The basic fact that makes this problem unappealing to work on, I’m a really good salesman, but whatever, I think the only reason this isn’t a slam dunk thing to work on is that there are not great examples. We’ve been working on it for a while, but we do not have beautiful results as of the recording of this podcast. Hopefully by the time it airs you can add a postscript: they’ve had great results since then, but…"} {"prompt": "It was too long to put in the margins of the podcast.", "completion": "Yeah, with luck. So I think it’s hard to work on because it’s not clear what a success looks like. It’s not clear if success is possible. But I do think there are a lot of questions. We have a lot of questions, and I think the basic setting is: look, there are all of these arguments; in mathematics, in physics, in computer science there are just a lot of examples of informal heuristic arguments. They have enough structural similarity that it looks very possible that there is a unifying framework, that these are instances of some general framework and not just a bunch of random things. So, for example, for the prime numbers, people reason about the prime numbers as if they were a random set of numbers. One view is that that’s just a special fact about the primes, they’re kind of random. A different view is that actually it’s pretty reasonable to reason about an object as if it was a random object as a starting point, and then as you notice structure, you revise from that initial guess. And to me, the second perspective looks probably more right. It’s just reasonable to start off treating an object as random and then notice perturbations from random, notice structure the object possesses; and the primes are unusual in that they have fairly little additive structure.
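A standard, textbook example of this “treat the object as random, then correct for the structure you notice” style of reasoning is the Hardy–Littlewood twin-prime heuristic. It is included here only to illustrate the flavor of argument being discussed; it is not ARC’s formalism.

```latex
% By the prime number theorem, a "random" integer near $n$ is prime with
% probability about $1/\ln n$. Presuming independence of "$n$ is prime"
% and "$n+2$ is prime" gives
\[
  \#\{\text{twin primes} \le N\} \;\approx\; \sum_{n \le N} \frac{1}{(\ln n)^2}
  \;\approx\; \frac{N}{(\ln N)^2},
\]
% and then noticing the additive structure the primes do have (both $n$ and
% $n+2$ must avoid the residue $0 \bmod p$ for each small prime $p$)
% corrects the estimate by the twin-prime constant $2C_2 \approx 1.32$.
```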
I think it’s a very natural theoretical project. There’s a bunch of activity that people do, and it seems like there’s a reasonable chance there’s something nice to say about unifying all of that activity. I think it’s a pretty exciting project. The basic strike against it is that it seems really hard. If you were someone’s advisor, I think you’d be like, what are you going to prove if you work on this for the next two years? And they’d be like, there’s a good chance: nothing. And that’s not what you do if you’re a PhD student. Normally you aim for those high probabilities of getting something within a couple of years. The flip side is, I think there are a lot of questions, and I think some of them we’re probably going to make progress on. So I think the pitch is mostly: are some people excited to get in now? Or are people more like, let’s wait and see; once we have one or two good successes, we can see what the pattern is, become more confident, and turn the crank to make more progress in this direction. But for people who are excited about working on stuff with reasonably high probabilities of failure, and not really understanding exactly what you’re supposed to do, I think it’s a pretty good project. I feel like, if we succeed and people are looking back in 50 years on what was the coolest stuff happening in math or theoretical computer science, this will definitely be in contention. And I would guess that for lots of people it would just seem like the coolest thing from this period of a couple of years or whatever."} {"prompt": "Right. Because this is a new method in so many different fields, the ones you mentioned: physics, math, theoretical computer science. I don’t know, because what is the average math PhD working on? He’s working on a subset of a subset of something I can’t even understand or pronounce. Math is quite esoteric. But yeah, even at a small chance of it working, you shouldn’t forget about the value for alignment, but even without that, if this works it’s a really big deal.", "completion": "There’s a good chance that if I had my current set of views about this problem and didn’t care about alignment and had the career safety to just spend a couple of years thinking about it, spend half my time for like five years or whatever, I would just do that. I mean, even without caring at all about alignment, it’s a very nice problem. It’s very nice to have this library of things that succeed, where they feel so tantalizingly close to being formalizable, at least to me, and such a natural setting, and then to have so little purchase on it. There aren’t that many really exciting-feeling frontiers in theoretical computer science."} {"prompt": "And then say a smart person, they don’t have to be a grad student, but a smart person is interested in this. What should they do? Should they try to attack some open problem you have put on your blog? Or what is the next step?", "completion": "Yeah, I think there’s a first step. There are different levels of ambition, different ways of approaching the problem. But we have this write-up from last year, or I guess eleven months ago, on formalizing the presumption of independence, that provides kind of a communication of what we’re looking for in this object.
And I think the motivating problem is saying, here’s a notion of what an estimator is, and here’s what it would mean for an estimator to capture some set of informal arguments. And a very natural problem is to just try and do that: go for the whole thing, try and understand it, and then hopefully come up with a different approach, or end up having context from a different angle on the kind of approach we’re taking. I think that’s a reasonable thing to do. I do think we also have a bunch of open problems, so maybe we should put up more of those open problems. The main concern with doing so is that for any given one, we’re like, this is probably hopeless. We put up a prize earlier in the year for an open problem, which, tragically, I guess the time is now to post the debrief from that, I owe it from this weekend, I was supposed to do that, so I’ll probably do it tomorrow, but no one solved it. It’s sad putting out problems that are hard. We could put out a bunch of problems that we think might be really hard."} {"prompt": "But what about that famous case of that statistician, some PhD student who showed up late to a class and saw some problems on the board and thought they were homework, and they were actually open problems, and he solved them because he thought they were homework?", "completion": "Right, yeah. I mean, we have much less information that these problems are hard. Again, I expect the solutions to most of our problems to not be that complicated. And we’ve been working on it in some sense for a really long time. Total full-time-equivalent work across the whole team is probably like three years in this area, spread across a couple of people. But that’s very little compared to the problem. It is very easy to have a problem where you put in three years of full-time-equivalent work, but in fact there’s still an approach that’s going to work quite easily, within like three to six months, if you come at it from a new angle. And we’ve learned a fair amount from that that we could share, and we probably will be sharing more over the coming months."} {"prompt": "As far as funding goes, is this something where, I don’t know, if somebody gave you a whole bunch of money, that would help? Or does it not matter how many people are working on this, by the way?", "completion": "Right now there are four of us full time, and we’re hiring more people."} {"prompt": "And then is funding something that would matter?", "completion": "I mean, funding is always good. We’re not super funding constrained right now. The main effect of funding is it will cause me to continuously and perhaps indefinitely delay fundraising. Periodically I’ll set out to be interested in fundraising, and someone will offer a grant, and then I will get to delay fundraising for another six or nine months. You can delay the time at which Paul needs to think for some time about fundraising."} {"prompt": "Well, one question I think would be interesting to ask: people can talk vaguely about the value of theoretical research and how it contributes to real world applications, and you can look at historical examples or something, but you are somebody who actually has done this in a big way. Like, RLHF is something you developed, and then it actually got into an application that has been used by millions of people. Tell me about just that pipeline.
How can you reliably identify theoretical problems that will matter for real world applications? Because it’s one thing to read about Turing or something and the halting problem, but here you’d have the real thing.", "completion": "Yeah, I mean, it is definitely exciting to have worked on a thing that has a real world impact. The main caveat I’d provide is that RLHF is very simple compared to many things. And so the motivation for working on that problem was, look, this is how it probably should work, or this is a step in some progression. It’s unclear if it’s the final step or something, but it’s a very natural thing to do that people probably should be and probably will be doing. If you want to talk about crazy stuff, it’s good to help make those steps happen faster, and it’s good to learn about all the issues that occur in practice, even for things that seem very simple on paper. But mostly, my sense of the world is that things that look like good ideas on paper often are harder than they look, but the world isn’t that far from what makes sense on paper. Large language models look really good on paper, and RLHF looks really good on paper, and these things, I think, just work out. Maybe it’s kind of a trope, but people talk about how it’s easy to underestimate how much gap there is to practice, how many things will come up that don’t come up in theory. But it’s also easy to overestimate how inscrutable the world is. The things that happen mostly are things that do just kind of make sense. I feel like most ML implementation does just come down to a bunch of detail, though: build a very simple version of the system, understand what goes wrong, fix the things that go wrong, scale it up, understand what goes wrong. And I’m glad I have some experience doing that, and I think that does cause me to be better informed about what makes sense in ML and what can actually work. But I don’t think it caused me to have a whole lot of deep expertise or deep wisdom about how to close the gap."} {"prompt": "Yeah, but is there some tip on identifying things like RLHF which actually do matter, versus making sure you don’t get stuck in some theoretical problem that doesn’t matter? Or is it just coincidence? I mean, is there something you can do in advance to make sure that the thing is useful?", "completion": "I don’t know if the RLHF story is the best success case or something, because, maybe more importantly, it’s just not that hard a case. It’s a little bit unfair to be like, I’m going to predict the thing, when I pretty much think it was going to happen at some point anyway. So it was mostly a case of acceleration, whereas the work we’re doing right now is specifically focused on something that’s kind of crazy enough that it might not happen, or challenging enough that it might not happen even if it’s a really good idea. But I’d say in general, and this draws a little bit on experience more broadly in theory, a lot of the time when theory fails to connect with practice, it’s just kind of clear it’s not going to connect, if you actually think about it and ask: what are the key constraints in practice? Is the theoretical problem we’re working on actually connected to those constraints?
Is there something that is possible in theory that would actually address real world issues? As a theoretical computer scientist, I think the vast majority of theoretical computer science has very little chance of ever affecting practice, but also it is completely clear which theory has very little chance of affecting practice. Most of theory fails to affect practice not because of all the stuff you don’t think of, but because, you could call it dead on arrival, or you could also say that’s not really the point. Mathematicians likewise are not trying to affect practice, and they’re not asking, why does my number theory not affect practice? It was kind of obvious. I think the biggest thing is just actually caring about that, and then learning at least what’s basically going on in the actual systems you care about, and what are actually the important constraints, and is this a real theoretical problem? The basic reason most theory doesn’t do that is that that’s not where the easy theoretical problems are. So I think theory is instead motivated by, we’re going to build up the edifice of theory, and sometimes, opportunistically, we’ll find a case that comes close to practice, or we’ll find something practitioners are already doing and try to bring it into our framework or something. But the theory of change is mostly not, this thing is going to make it into practice; it’s mostly, this is going to contribute to the body of knowledge that will slowly grow and sometimes opportunistically yield important results."} {"prompt": "How big do you think a seed AI would be? What is the minimum sort of encoding of something that is as smart as a human?", "completion": "I think it depends a lot on what substrate it gets to run on. So you’d have to tell me how much computation it gets, or what kind of real world infrastructure it gets. You could ask, what’s the shortest program which, if you run it on a million H100s connected in a nice network with a hospitable environment, will eventually go to the stars? That seems like it’s probably on the order of tens of thousands of bytes; if I had to guess the median, I’d guess 10,000 bytes."} {"prompt": "Wait, the specification or the compression of just the program?", "completion": "A program which, when run. Oh, got it. But that’s going to be really cheaty. So if you instead ask, what’s the thing that has values and will expand and roughly preserve its values? Because that 10,000 byte thing will just lean heavily on evolution and natural selection to get there. For that, I don’t know, a million bytes, 100,000 bytes, something like that."} {"prompt": "Do you think AI lie detectors will work, where you kind of just look at the activations and not find explanations in the way you were talking about with heuristic arguments, but literally just, here’s what truth looks like, here’s what lies look like. Let’s just segregate the latent space and see if we can identify the two.", "completion": "Yeah, I think just training a classifier to do it is a little bit complicated for a few reasons and may not work. But if you broaden the scope and say, hey, you want to know if someone’s lying, and you get to interrogate them, but also you get to rewind them arbitrarily and make a million copies of them, I do think it’s pretty hard to lie successfully.
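The “just train a classifier” version being discussed would, in its simplest form, look roughly like a linear probe on activations. Here is a toy sketch with synthetic data standing in for real activations; the separating direction, dimensions, and labels are all hypothetical, and real truthfulness probing is considerably more involved and, as noted above, may not work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic stand-in for hidden activations: honest vs. deceptive statements
# differ (noisily) along one hidden direction. With a real model you would
# instead collect activations on labelled honest/deceptive examples.
dim, n = 64, 2000
truth_dir = rng.normal(size=dim)
labels = rng.integers(0, 2, size=n)                      # 1 = honest, 0 = deceptive
acts = rng.normal(size=(n, dim)) + 0.5 * np.outer(2 * labels - 1, truth_dir)

# Linear probe: fit on part of the data, evaluate on held-out activations.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], labels[:1500])
print("held-out probe accuracy:", probe.score(acts[1500:], labels[1500:]))
```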
You get to look at their brain, even if you don’t quite understand what’s happening. You get to rewind them a million times. You get to run all those parallel copies through gradient descent or whatever. I think there’s a pretty good chance that you can just tell if someone is lying, like a brain emulation or an AI or whatever, unless they were aggressively selected. If it’s just that they are trying to lie well, rather than that they were selected over many generations to be excellent at lying or something, then, well, your ML system hopefully wasn’t trained a bunch to lie, and you want to be careful about whether your training scheme effectively does that. Yeah, that seems like it’s more likely than not to succeed."} {"prompt": "And how possible do you think it will be for us to specify human-verifiable rules for reasoning, such that even if the AI is superintelligent and we can’t really understand why it does certain things, we know that the way in which it arrives at these conclusions is valid? Like, if it’s trying to persuade us of something, we can say, I don’t understand all the steps, but I know that this is something that’s valid and you’re not just making shit up.", "completion": "That seems very hard if you want it to be competitive with learned reasoning. It depends a little bit on exactly how you set it up, but for the ambitious versions of that, say versions that would address the alignment problem, they seem pretty unlikely, like 5% kind of thing."} {"prompt": "Is there an upper bound on intelligence? Not in the near term, but just superintelligence at some point. How far do you think that can go?", "completion": "It seems like it’s going to depend a little bit on what is meant by intelligence. It kind of reads as a question that’s similar to, is there an upper bound on strength or something? There are a lot of forms. I think there are sort of arbitrarily smart input-output functionalities, and then if you hold fixed the amount of compute, there is some smartest one: if you’re just asking, what’s the best set of ten to the 40th operations, there are only finitely many of them, so there’s some best one for any particular notion of best that you have in mind. So for the unbounded question, where you’re allowed to use arbitrary description complexity and compute, probably no. And then there is some optimal conduct if you have some goal in mind and ask, what action best achieves it? If you imagine a little box embedded in the universe, I think there is kind of just an optimal input-output behavior. So I guess in that sense I think there is an upper bound, but it’s not saturable in the physical universe, because it’s definitely exponentially slow, right?"} {"prompt": "Yeah. Because of comms or other things, or heat. It just might be physically impossible to instantiate something smarter than this.", "completion": "Yeah, for example, if you imagine what the best thing is, it would almost certainly involve simulating every possible universe, maybe modulo moral constraints, which I don’t know if you want to include. So that would be very slow. It would involve simulating, I don’t know exactly how slow, but like double exponential, very slow."} {"prompt": "Carl Shulman laid out his picture of the intelligence explosion in the seven hour episode. I know you guys have talked a lot. What about his basic picture? Do you have some main disagreements?
Is there some crux that you guys have explored?", "completion": "It’s related to our timelines discussion from before. Yeah, I think the biggest issue is probably error bars, where Carl has a very software-focused, very fast kind of takeoff picture. And I think that is plausible, but not that likely. I think there are a couple of ways you could perturb the situation, and my guess is one of them applies. I don’t know exactly what Carl’s probability is; I feel like Carl’s going to have like a 60% chance on some crazy thing that I’m only going to assign like a 20% or 30% chance to. And those kinds of perturbations are: one, how long a period is there of complementarity between AI capabilities and human capabilities, which will tend to soften takeoff? Two, how much diminishing returns are there on software progress, such that a broader takeoff, involving scaling up electricity production and hardware production, is likely to happen during takeoff, where I’m more like 50-50? Stuff like this."} {"prompt": "Yeah. Okay, so is it that you think the other constraints will be harder? The basic case he’s laid out is that you can just have a sequence of things like FlashAttention or MoE, and you can just keep stacking these kinds of things on.", "completion": "I’m very unsure if you can keep stacking them. It’s kind of a question of what the returns curve is, and Carl has some inference from historical data, some way he’d extrapolate the trend. I am more like 50-50 on whether the software-only intelligence explosion is even possible, and then there’s a somewhat higher probability that it’s slower than he expects."} {"prompt": "Why do you think it might not be possible?", "completion": "Well, the entire question is: if you double R&D effort, do you get enough additional improvement to further double the efficiency? And that question will itself be a function of your hardware base, how much hardware you have. And the question is, at the amount of hardware we’re going to have and the level of sophistication we have as the process begins, actually, the initial answer only depends on the hardware, or rather, each level of hardware will have some point at which this dynamic asymptotes. So the question is just, for how long is it the case that each doubling of R&D at least doubles the effective output of your AI research population? And I think I have a higher probability on that. I think it’s kind of close. If you look at the empirics, I think the empirics benefit a lot from continuing hardware scale-up, so that the effective R&D stock is significantly smaller than it looks, if that makes sense."} {"prompt": "What are the empirics you’re referring to?", "completion": "There are kind of two sources of evidence. One is looking across a bunch of industries at what the general improvement is with each doubling of either R&D investment or experience, where it is quite exceptional, anyway, it’s pretty good, to have a field where each time you double R&D investment, you get a doubling of efficiency. The second source of evidence is on actual algorithmic improvement in ML, which is obviously much, much scarcer. And there you can make a case that each doubling of R&D has given you roughly a 4x or something increase in computational efficiency. But there’s a question of how much that benefits from the scale-up.
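The condition being debated can be written compactly. This is my formalization of the verbal claim, in the spirit of standard semi-endogenous growth reasoning, not Paul’s or Carl’s exact model.

```latex
% Suppose each doubling of effective R&D input multiplies software
% efficiency, and hence (once AI is doing the research) effective R&D
% input itself, by a factor $r$. Iterating this feedback gives
\[
  \text{a software-only intelligence explosion requires } r \ge 2,
\]
% i.e. each doubling of R&D must at least double the effective output of
% the AI research population. The historical figure quoted above, roughly
% a $4\times$ efficiency gain per doubling of R&D, would satisfy this, but
% only if it still holds once hardware is no longer scaling underneath it.
```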
When I say the effective R&D stock is smaller, I mean we scale up: you’re doing a new task every couple of years, because you’re operating at a scale much larger than the previous scale, and so a lot of your effort is figuring out how to make use of the new scale. So if you’re not increasing your installed hardware base, if you’re just flat at a level of hardware, I think you get much faster diminishing returns than people have gotten historically. I think Carl agrees, in principle, this is true. And then once you make that adjustment, I think it’s very unclear where the empirics shake out. I think Carl has thought about these more than I have, so I should maybe defer more, but anyway, I’m at like 50-50 on that."} {"prompt": "How have your timelines changed over the last 20 years?", "completion": "Last 20 years?"} {"prompt": "Yeah. How long have you been working on anything related to AI?", "completion": "So I started thinking about this stuff in 2010 or so, so I think my earliest timeline prediction would be from 2011. I think in 2011 my rough picture was, we will not have insane AI in the next ten years, and then I get increasingly uncertain after that, but we converge to 1% per year or something like that. And then probably in 2016 my take was, we won’t have crazy AI in the next five years, but then we converge to one or 2% per year after that. Then in 2019, I guess I made a round of forecasts where I gave like 30% or 25% to crazy AI by 2040 and like 10% by 2030 or something like that. So I think my 2030 probability has been kind of stable, and my 2040 probability has been going up. And I would guess it’s too sticky: I guess that 40% I gave at the beginning is just from not having updated recently enough, and I maybe just need to sit down; I would guess that should be even higher. And I think like 15% by 2030. I’m not feeling that bad about that one; it’s just that each passing year is a big update against 2030. We don’t have that many years left, and that’s roughly counterbalanced with AI going pretty well. Whereas for the 2040 thing, the passing years are not that big a deal, and as we see that things are basically working, that’s cutting out a lot of the probability of not having AI by 2040. So my 2030 probability is up a little bit, maybe twice as high as it used to be or something like that. My 2040 probability is up much more significantly."} {"prompt": "How fast do you think we can keep building fabs to keep up with the AI demand?", "completion": "Yeah, I don’t know much about any of the relevant areas. My best guess, my understanding, is that right now like 5% or something of next year’s total for the best-process fabs will be making AI hardware, of which only a small fraction will be going into very large training runs, only a couple. So maybe a couple of percent of leading-process output, and that represents maybe like 1% of total output or something. I don’t know if that’s right, but I think it’s the rough ballpark we’re in. I think things can be pretty fast as you scale up for the next order of magnitude or two from there, because you’re basically just shifting over other stuff. Beyond that, my sense is it would be years of delay; there are multiple reasons to expect years of delay for going past that, and maybe even at that point you start having, yeah, there are just a lot of problems.
Building new fabs is quite slow, and TSMC is not planning on increases in total demand driven by AI, kind of conspicuously not planning on it. I don’t think anyone else is really ramping up production in anticipation either. And then similarly, just building data centers of that size seems very, very hard and also probably has multiple years of delay."} {"prompt": "What does your portfolio look like?", "completion": "I’ve tried to get rid of most of the AI stuff that’s plausibly implicated in policy work or advocacy, e.g. the RSP stuff, or my involvement with Anthropic."} {"prompt": "What would it look like if you…", "completion": "Had no conflicts of interest and no inside information? Like, I also still have a bunch of hardware investments which I need to think about, but, I don’t know, a lot of TSMC. I have a chunk of Nvidia, although I’ve just kept betting against Nvidia constantly since 2016 or something. I’ve been destroyed on that bet, although AMD has also done fine. The case now is even easier, but it’s similar to the case in the old days: it’s just a very expensive company given the total amount of R&D investment they’ve made. They have, whatever, a trillion dollar valuation or something, which is very high. So the question is, how expensive is it to make a TPU that actually outcompetes the H100 or something? And I’m like, wow, it’s a real high level of incompetence if Google can’t catch up fast enough to make that trillion dollar valuation not justified."} {"prompt": "Whereas with TSMC they have a harder moat, you think?", "completion": "Yeah, I think it’s a lot harder, especially if you’re in this regime where you’re trying to scale up. If you’re unable to build fabs, which I think will take a very long time, to build as many fabs as people want, the effect of that will be to bid up the price of existing fabs and existing semiconductor manufacturing equipment. And so just those hard assets will become spectacularly valuable, as will the existing GPUs. Yeah, I think it’s just hard; that seems like the hardest asset to scale up quickly. So if you have a rapid run-up, it’s the asset that you’d expect to benefit most. Whereas Nvidia’s stuff will ultimately be replaced by either better stuff made by humans or stuff made with AI assistance. The gap will close even further as you build AI systems."} {"prompt": "Right. Unless Nvidia is using those systems.", "completion": "Yeah, the point is just that future R&D will so dwarf past R&D, and there’s just not that much stickiness. There’s less stickiness in the future than there has been in the past. Yeah, I don’t know. I’m not commenting based on any private information, just my gut. Having caveatted that, this is the single bet where I’m most okay not including Nvidia in that portfolio."} {"prompt": "And final question: there are a lot of schemes out there for alignment, and a lot of general takes, and a lot of this stuff is over my head, where it literally took me weeks to understand the mechanistic anomaly stuff you work on. Without spending weeks…", "completion": "How do you detect bullshit?"} {"prompt": "Yes. People have explained their schemes to me and, honestly, I don’t know if it makes sense or not. With you, I’m just like, I trust Paul enough that I think there’s probably something here if I try to understand it enough.
But how do you detect bullshit?", "completion": "Yeah, so I think it depends on the kind of work. For the kind of stuff we’re doing, my guess is that for most people there’s just not really a way you’re going to tell whether it’s bullshit. So I think it’s important that we don’t spend that much money, and the people we want to hire are probably going to dig in in depth. I don’t think there’s a way you can tell whether it’s bullshit without either spending a lot of effort or leaning on deference. With empirical work, it’s interesting in that you do have some signals of the quality of the work. You can ask, does it work in practice? Does the story make sense? I think the stories are just radically simpler, and so you probably can evaluate those stories on their face. And then you mostly come down to these questions of, what are the key difficulties? When people dismiss something because it doesn’t deal with a key difficulty, or because it runs into the following insuperable obstacle, I tend to be a little bit more skeptical about those arguments. Something can be bullshit because it’s not addressing a real problem; I think the easiest way is that it’s a problem someone’s interested in that’s just not actually an important problem, and there’s no story about why it’s going to become an important problem. E.g., it’s not a problem now and won’t get worse, or it is maybe a problem now, but it’s clearly getting better. That’s one way, and then, conditioned on passing that bar, dealing with something that actually engages with important parts of the argument for concern, it comes down to actually making sense empirically. I think most work is anchored by the source of feedback of actually engaging with real models. So: does the way it engages with real models make sense, and does the story about how it deals with key difficulties actually make sense? I’m pretty liberal past there. I think it’s really hard to, e.g., look at mechanistic interpretability and say, well, this obviously can’t succeed. And I’m like, I don’t know, how can you tell it obviously can’t succeed? I think it’s reasonable to ask, given the total investment in the field, how fast is it making progress, and how does that pencil out? I think most things people work on, though, actually pencil out pretty fine. They look like they could be reasonable investments. Things are not super out of whack."} {"prompt": "Okay, great. This is, I think, a good place to close. Paul, thank you so much for your time.", "completion": "Yeah, thanks for having me. It was good chatting."}