diff --git "a/ai_safety_talks.jsonl" "b/ai_safety_talks.jsonl" new file mode 100644--- /dev/null +++ "b/ai_safety_talks.jsonl" @@ -0,0 +1,8 @@ +{"id": "f7f410439a2c91af3f647cc1c1515f3f", "title": "Risks from Learned Optimization: Evan Hubinger at MLAB2", "url": "https://www.youtube.com/watch?v=OUifSs28G30", "source": "ai_safety_talks", "source_type": "youtube", "text": "hello everybody I am Evan hubinger I am\na research fellow at Mary I used to work\nat open AI I've done a bunch of stuff\nwith Paul Christiano uh I also wrote\nthis paper that I'm talking about today\nuh\nlet me very quickly raise hands if you\nhave had any experience with the paper\nI'm going to be talking about let's see\nokay okay pretty good sweet okay uh\nwe could even potentially I could give a\nfollow-up talk that covers more advanced\nstuff but okay I think that was not\nquite enough hands so I think I'm going\nto be doing this talk and hopefully\nit'll help people at least work through\nthe material understand stuff better I\nthink you will probably understand it\nbetter from having seen the talk at the\nvery least maybe at some other point if\nI stick around I can give another talk\nokay so yes let's get started\nso let's see uh about me uh this talk\nwas made when I was at openai I'm\ncurrently Miri I did other stuff before\nthat\num I've given this talk in a bunch of\nplaces it's based on this paper\nyou know it I think some of you\nokay what are we talking about\nokay so I want to start uh with just\ngoing back and trying to understand what\nmachine learning does and how it works\nuh because it's very important to\nunderstand how machine learning works if\nwe're going to try to talk about it uh\nso what does machine learning do so so\nfundamentally you know any machine\nlearning training process it has some\nmodel space some you know large space of\nalgorithms and then does some really big\nsearch over that algorithm space to find\nsome algorithm which performs well\nempirically on some uh set of data on\nsome loss function over that data\nokay this is what essentially everything\nlooks like this is what RL looks like\nthis is what you know supervised\nlearning looks like this but you know\nfine-tuning looks like everything looks\nessentially like we have some really big\nparameterized space we do a big search\nover it we find an algorithm which does\nwell on some loss over some data\nokay so I think that when a lot of\npeople like to think about this process\nthere's an abstraction that is really\ncommon in thinking about it and an\nabstraction I think can be very useful\nand I call that abstraction that he does\nthe right thing abstraction\nso well when we trained this you know\nmodel when we produced this uh you know\nparticular parameterization this\nparticular algorithm over this\nparticular data\num well we selected that algorithm to do\na good job on this loss on that data and\nso we sort of want to conceptually think\nabout the algorithm as Trying to\nminimize the loss you know if you're\nlike how is this algorithm going to\ngeneralize you can sort of think well\nyou know what would it do uh what it\nwould that would be loss minimizing what\nwould be the sort of loss minimizing\nBehavior uh but of course this is not\nliterally true we did not literally\nproduce an algorithm that is in fact\nminimizing the loss on all off dispution\npoints what we have produced is we\nproduce an algorithm that empirically\nwas observed to minimize the loss on the\ntraining data but in fact when we move\nit into new situations uh you know it\ncould do anything\num but you know well we selected to\nminimize the loss and so we'd like to\nthink uh probably it's going to do\nsomething like the loss minimizing\nBehavior\nokay so uh this abstraction you know\nlike all abstractions that are not\ntrivial are leaky and we are going to be\ntalking about ways in which this\nabstraction is leaky\nokay so I'm going to be talking about a\nvery specific situation uh where I think\nthis abstraction can be leaky in a way\nthat I think is problematic so that\nsituation is a situation where you have\nuh a the algorithm itself is doing some\nsort of optimization so what do I mean\nby that so we're going to say that a\nsystem is an Optimizer if it is\ninternally searching through some space\nof possible uh plans strategies actions\nwhatever uh you know for those that\nscore highly on some criteria so you\nknow it's maybe maybe it's looking for\nactions that would get low loss maybe it\nis looking for actions that would get it\ngold coins maybe it is looking for\nactions that would do a good job of\npredicting the future whatever it's\nlooking for things that do a good job on\nsome criteria okay\nso just really quickly gradient descent\nis an Optimizer because it looks through\npossible algorithms to find those that\ndo a good job empirically on the loss uh\nyou know a Minimax algorithm is an\nOptimizer it looks for you know moves\nthat do a good job at playing the game\nhumans are optimizers we look for you\nknow plans strategies that accomplish\nour goals things that are not optimizers\na bottle cap is not an Optimizer this is\na classic example you could take a\nbottle of water and it's got the water\nin the bottle and it's like really good\nat keeping the water in the bottle you\ncan like turn the bottle over and the\nwater stays in but you know that's not\nbecause the bottle cap is doing some\nsort of optimization procedure it's\nbecause we did an optimization procedure\nto produce a bottle and that bottle is\nthen really good in the same way that a\ngradient descent process by default does\nan optimization procedure over the space\nof algorithms and produces a you know\nneural network that is not necessarily\nitself doing optimization you know it\nmay just be like the bottle cap it was a\nthing that was optimized to do something\nbut it's not necessarily doing any\noptimization itself certainly if I just\nrandomly initialize a neural network it\nis definitely only not going to be doing\nany optimization unless I get really\nreally unlucky or lucky\num\nokay all right so that is what an\nOptimizer is\nokay so what we want to talk about and I\nalluded to this previously is a\nsituation where the uh model itself uh\nthe algorithm that you found when you\ndid this big search is itself an\nOptimizer it is doing some optimization\ninternally inside of your neural network\nuh so I'm going to give names to things\nin in that particular situation where\nthe thing is an Optimizer in that\nsituation we're going to call the grainy\ndescent process that did a bunch of\noptimization on top to produce the model\nwe're going to call it a base Optimizer\nand we're going to call the thing that\nit optimized the algorithm that it found\nthat gradient descent found that was in\nfact doing some optimization we're going\nto call it a mace Optimizer what does\nthat mean so Mesa is sort of the\nopposite of meta you know oftentimes you\nmay be familiar with meta learning you\nknow we have an optimization process\nthen we put a little meta learning\noptimization process on top of it that\nlike searches over possible ways the\nlearning process could go and we're like\nwell that's sort of what happened here\nyou know you had this gradient descent\nprocess and you're sort of grading\ndescent process turned into a meta\nlearning process now instead of\nsearching over you know you know a whole\nSpace of algorithms it's sort of\nspecifically searching over learning\nalgorithms and so we're going to say\nwell essentially the relationship to the\nbase Optimizer to the model is very\nsimilar to the relationship between a\nmeta Optimizer and a sort of the thing\nthat meta Optimizer is optimizing so\nwe're going to say well it's one meta\nlevel below it's a Mesa optimizer\nokay\nall right so some things that people\noften misunderstand here so when I say\nmace Optimizer I don't mean like some\nsubsystem or some you know component of\nthe model I just mean like a model you\nknow it's a neural network and it's\ndoing some stuff and you know in\nparticular we're referring to a\nsituation where it is doing you know\nsome sort of search it's you know\nsearching over some space of possible\nplans strategies whatever for something\nwhich does a good job on some criteria\nso it's the whole trained model\num okay so I talked a little bit about\nthe relationship with meta learning uh\nbut you know essentially you can sort of\nthink about this as spontaneous meta\nlearning you know uh you you didn't sort\nof think you were going to be doing meta\nlearning that you were doing\noptimization over learning processes but\nin fact the algorithm that your grading\ndescent process found that was doing a\ngood job on the task was itself a\nlearning algorithm and so now you're\nyou're in the business of doing meta\nlearning because that's what algorithm\nyou found\nokay and so the difficulty you know is\nis in this situation controlling what\nwhat we're going to learn\nokay\nso uh right so we have this does the\nright thing abstraction that you know we\nsaid well it's leaky sometimes but at\nleast helps us reason about models in\nother times uh so we can ask but what is\nthe does the right thing abstractions\nsay in this situation where you have a\nmodel that is itself doing optimization\nin the situation where you have a mace\nOptimizer well so in that situation it\nshould say you know if the model is\ndoing the right thing which is you know\nit's actually out of distribution going\nto try to minimize the loss then it\nshould be whatever the thing that this\nsort of Mace Optimizer is searching for\nwhatever you know optimization it's\ndoing that optimization should be\ndirected towards the goal of minimizing\nthe loss\nokay so this is one abstraction that you\ncould use to try to think about this\nprocess but of course as we stated this\nis not the actual process that we are\ndoing uh you know so so we don't know\nwhether this abstraction actually holds\nand so what we want to know is what\nhappens when this abstraction is leaky\nokay so when it's leaky we're sort of\ngoing to have two alignment problems\nwe're going to have an outer alignment\nproblem which is like uh uh make it so\nthat the you know base objective\nwhatever the loss function is you know\nthat's the sort of thing that we would\nwant and then an inner alignment problem\nwhich is make it so that the main\nsubjective the thing that the thing is\nactually pursuing uh is in fact the\nthing that we wanted it to pursue\none uh caveat that I will say here so I\nthink that this sort of this sort of\ntraditional setup really mostly applies\nin situations where the sort of goal\nthat you have for how you're going to\nalign the system is we're going to first\nspecify some loss function and then\nwe're going to make the does the right\nthing abstraction hold we're going to\nspecify some loss function that we'd\nlike and then we're going to get the\nmodel to optimize it I will say that's\nnot the only way that you could produce\nan align system right so an alternative\nprocedure would be specify some loss\nfunction that you know is bad if the\nthing actually optimizes for that loss\nfunction you know that it would end up\ndoing uh you know producing bad outcomes\nand then also get it to not optimize for\nthat get it to do some totally different\nthing that nevertheless results in good\nbehavior so and in fact I think that's\nactually a pretty viable strategy and\none that we may want to pursue but for\nthe purposes of this talk I'm going to\nbe assuming that's not the strategy that\nwe're going with that the strategy we're\ngoing with is we're going to write down\nsomething uh you know that specifies you\nknow some loss or reward uh that like\nactually reflects what we care about and\nthen we're going to try to get them all\nall to sort of actually care about that\nthing too but I just want to preface\nwith that's only one strategy and not\neven necessarily the best one but it is\none that I think is at least easy to\nconceptualize and that we can sort of uh\nyou know understand what's going on if\nif that's the strategy we're pursuing\nquestion\n[Music]\na thing that we don't want to do with\nthe training to not do that yeah so\nhere's a really simple example here's a\nreally simple example this is not\nsomething I think we should literally do\nbut here's a simple example that might\nlike illustrate the point so let's say\nwhat we do is we have an environment and\nyou know we're going to do RL in that\nenvironment and we're going to set up\nthe environment to have incentives and\nstructures that are similar to the human\nancestral environment and then we're\nlike well you know if it's kind of\nsimilar to the human ancestral\nenvironment maybe it'll produce agents\nthat operate similarly to humans and so\nthey'll be aligned right that's a hope\nthat you could have and if that's your\nstrategy then the thing that you're\ndoing is you are saying I'm going to\nspecify some incentives that I know are\nnot aligned right like reproduction you\nknow natural selection that's not what I\nwant that's not what I care about but I\nthink that those incentives will\nnevertheless produce agents which\noperate in a way that is the the thing\nthat I want so so you could have that as\nyour strategy we're not talking about\nthat sort of a strategy though right now\nin this talk but but you but I don't\nwant to like rule it out I don't want to\nsay that like you could never do that in\nfact I think a lot of my favorite\nstrategies for alignment do look like\nthat though they don't look like the one\nI just described but\nokay but they do have the general\nstructure of like we don't necessarily\nwrite down a loss function that\nliterally captures what we want but uh\nsupposing we do we're going to try to\nexplore you know what this what this\nlooks like and in fact you know as we'll\nsee basically all of this is going to\napply to that situation too because what\nwe're the problem that we're going to\nend up talking about is just the central\nproblem of you know what happens when\nyou have some loss and then you want to\nunderstand what thing will your uh model\non you know if it is itself an Optimizer\nend up optimizing for\nokay is that clear does that make sense\nokay\ngreat\nokay so you've probably heard stories of\nouter alignment failures uh you know\nwhat they might look like you know\nclassical examples are things like\npaperclip maximizers you've got you know\nyou're you're like I want to make the\nmost paper clips in my paper clip\nFactory and then uh you know it it just\ndestroys the world because that's the\nbest way to make paper clips\num that is a classic example of an outer\nalignment failure what I would like to\nprovide is sort of more classic examples\nof inner alignment failures so we can\nsort of start to get a sense of what it\nmight look like if uh inter alignment\nyou know in the sense that we talked\nabout you know previously where you know\nthe the model ends up optimizing for a\nthing that is different than the sort of\nthing that you wanted to uh that you\nspecified might look like so so what\ndoes that look like well so when inner\nalignment fails it looks like a\nsituation that we're going to call\ncapability generalization without\nobjective generalization so what does\nthat mean so let's say I have a training\nenvironment and it looks like this brown\nmaze on the left and I've got this uh\nyou know some arrows that like Mark\nwhat's going on uh and you know the loss\nfunction you know the reward function\nthat I use I'm gonna we're gonna do like\nRL this environment I'm going to say you\nknow reward the agent for finishing the\nmaze I want it to get to the end\nokay and then I'm going to deploy to\nthis other environment where now I've\ngot a bigger maze it's blue instead of\nbrown and I have this green arrow at\nthis like random position\nand now I want to know what happens\nokay so you know here are some things\nthat could happen in this new\nenvironment when we ask you know what is\nthe generalization behavior of this\nagent so here's one thing that could\nhappen it could look at this you know\nmassive blue Maze and just like you know\nnot have any idea how to do anything and\nso just like you know randomly you know\ngoes off and does nothing you know\nuseful that would be a situation where\nyou know we're going to say it's\ncapabilities did not generalize you know\nit did not learn a general purpose\nenough algorithm for solving mazes that\nthe algorithm was able to generalize\nthis new environment\nokay but but it might so let's say\ncapabilities did generalize it actually\ndid learn some sort of general purpose\nmedia solving algorithm and knows how to\noperate in this environment well there's\nsort of two things that it could you\nknow potentially learn at least two uh\nit could learn to use those maze solving\ncapabilities to get to the end of the\nmaze which is the thing that we were\nrewarding it for previously or it could\nuse the maze solving capabilities to get\nto the Green Arrow\nuh and that you know if the strategy was\npursuing if the generalization it\nlearned was always go to the Green Arrow\nversus go always go to the end of the\nmaze you know in training those were\nindistinguishable uh but you know now in\nthis deployment environment they're you\nknow quite distinct\nokay and so uh let's say you know what\nwe really wanted right you know we\nreally wanted it to go to the end but\nyou know this green arrow just got moved\nand so we're like okay so what happens\nhere you know why is this bad that we\ncould end up in a situation where the\nmodel generalizes in such a way that it\ngoes to the Green Arrow instead well\nwhat's happened is we are now in a\nsituation where your model has learned a\nvery powerful general purpose capability\nwhich is the capability to solve mazes\nand that capability is misdirected uh\nyou have deployed a model that is in a\nsituation where it is it is very\npowerful and is actively using those\ncapabilities not to do the thing that\nyou you know wanted it to do uh so\nthat's that's bad that's dangerous right\nyou know we do not want to be in\nsituations where we are deploying\npowerful capable models in situations\nwhere they are using their capabilities\nuh where you know they don't actually\ngeneralize and that you know they don't\nactually match up with what we wanted\nthem to use those capabilities for\nokay so so that's concerning and that's\nthe thing that we want to avoid that's\nsort of what it looks like when we have\nthis sort of inner alignment failure\nokay so some more words we're going to\nsay that the situation where the model\nuh generalizes correctly on the sort of\nuh uh you know reward you know loss uh\nobjective that we want uh you know that\nactually is trying to finish the maze\neven out of distribution we're going to\nsay it's robustly aligned it's a\nsituation where you know it's alignment\nyou know it's it's you know actually\ngoing to pursue the correct thing is\nrobust to you know uh out of\ndistributional shifts whatever and we're\ngoing to say that if the model sort of\nlooks aligned over the training data but\nactually you know we can find situations\nwhere the model would not be aligned\nthen we're going to say it's\npseudo-aligned\nokay\num great all right yes robust\ntest data the test data we're going to\nsay well uh any this any data that it\nmight in fact encounter in the real\nworld if if the thing has you know is is\nactually robust in all situations it you\nknow it always pursues the right thing\nthen we're going to say it's robustly\nline importantly we're only talking\nabout the alignment here what we don't\ncare about is if the model like in some\nsituation its capabilities don't\ngeneralize right if it ends up in a\nsituation where like it's just like I\ndon't know what's going on and then it\ndoesn't do anything we're fine with that\nright that's still robust alignment\npotentially uh right we're sort of\nfactoring the capabilities in the\nalignment here we're saying that if it\nis capable of pursuing some complex\nthing it is always pursuing the thing\nthat we wanted it to pursue and here\nwe're saying it only looks like it's\npursuing the right thing on training but\nyou know out of uh distribution it might\nend up pursuing the wrong thing capably\njust to clarify robust alignment is\ndefined with respect to a particular\ndata distribution you can't speak about\nit in isolation\nuh no uh pseudo alignment is is defined\nspecifically relative to a data\ndistribution because it's defined\nrelative to the training bad\ndistribution but robust alignment is not\nrobust alignment is defined relative to\nany data it might ever encounter okay\nokay great\nall right so we have two problems yes\ndon't uh generalize are you doomed to\nhave a pseudo alignment because in cases\nwhere it doesn't know what to do it will\ndo things that are unintended well uh\nyou know I'm okay if it does things that\nare unintended we're worried if the like\noptimization is directed in an\nunintended place right that it does\npowerful optimization in the wrong you\nknow towards the wrong goal so like you\nknow if it's just like you know in the\nnew maze it just like Falls over and\ndoes like random you know stuff we don't\ncare right we're like okay whatever the\nproblem is a situation where it does\nknow how to solve complex bases you know\nit actually does have the capability uh\nyou know and then it uses like actively\noptimizes for something we don't want it\nto be optimizing for\nokay so like even if we have the May\nsolva and I was right I'll see a lion\nwe would we would call it robustly\naligned even if we like gave it a\nmassive Maze and then it tried to like\ntake over a piece of computational power\nto solve this maze because it was still\nactually applying right so in this\nsituation like you know just going back\nlike to here we're assuming that the\nloss function is actually reflects what\nwe care about right and we're like What\nwould it look like if the loss function\ndoes reflect what we care about to have\nan inner alignment failure so in that\nsituation you would be like well you\nknow your your strategy of like write\ndown a loss function that reflects\neverything I care about and then get a\nbottle's optimized for it you failed at\nthe point where the loss function you\nwrote down that you thought reflected\neverything you care about just said\noptimize for mazes but that was bad you\nshouldn't have done that that was\nactually not everything you cared about\nand so we're like that was where you\nwent wrong here\num I think that like I said though you\nknow you might not have this strategy\nbut like you know right now at least\nwe're assuming that it's your strategy\nand we're gonna say you know actually\nwhen you wrote down this last time you\nwere at least trying to get it to\ncapture you know everything that you\nwanted\nokay okay yes do you draw any\ndistinction between a model like\nspecifically optimizing for like some\ninternal golf versus like just\nstatistically preferring something\nyes yes we we absolutely are drawing a\ndistinction there we are saying that the\nthing is an Optimizer if it is like in\nfact has some search process inside of\nit that is you know you know searching\nfor things that do well according to\nsome objective uh you know this is the\ndifference right between the bottle cap\nyou know which like you know in fact\nsystematically prefers the water to be\nin a bottle but like isn't in fact doing\noptimization for it and the reason that\nwe're making this distinction is because\nwe're going to say well we're actually a\nlot more concerned about situations\nwhere the model is actively doing\noptimization that is misdirected then\nsituations where it has you know a bunch\nof you know heuristics that were\nselected in a problematic way and the\nreason we're more concerned about that\nis because misdirected optimization has\nthe ability to be a lot worse out of\ndistribution you know because it can you\nknow competently optimize for things\nthat we don't want to optimize for even\nin situations where you know we didn't\nselect it to be able to do that\nsorry\nif a model actually be an Optimizer that\nis a great question and that is what\nwe're going to be talking about next\nokay so yeah we're gonna be talking\nabout two problems here you know one is\nunintended optimization which is like\nyou know maybe you didn't want to be an\nOptimizer maybe you just wanted it to\nyou know be a bottle cap like you wanted\nto be a selection of heuristics you\ndidn't want to be doing any optimization\nwhy might you get optimization anyway\nand then the second question you know is\nthis inner alignment question which is\nokay if it is an Optimizer how do you\nget it to optimize for what you want\nokay let's talk about it okay what sort\nof machine learning systems are more or\nless likely to find optimizers that's\nthe question we're starting with\nokay so here's the first thing is look\nsearch algorithms are really good at\ngeneralizing so let's say we're in a\nsituation where you know we want our you\nknow algorithm to play go you know we\nwant to do a really good job at go\nplaying well there is a systematic uh\nsimilarity uh between all good go moves\nright the systematic similarity between\nall good go moves is that they are moves\nwhich have the property that if I do\nsome look ahead and you know try to see\nhow good is this move\nthey end up being really good right and\nso if I want to encode an algorithm\nwhich systematically produces good go\nmoves uh encoding search is a really\nsimple way to do that and it's a way\nthat generalizes in a lot of situations\nit's a way that you know if I go into\nsome new situation where you know I\ndon't necessarily have a bunch of\nheuristics that are well optimized for\nthat situation I nevertheless have the\nability to you know figure out what's\ngoing to do well you know the uh you\nknow the the you know the go engine\nwhich has a bunch of ability to do\nsearch you know can can find it you know\nabsolutely crazy billboard that it's\nnever seen before and still do you know\n10 steps of look ahead and come out with\nyou know some ideas of what things are\ngoing to be good in that situation\num and you know obviously in fact you\nknow we explicitly program search in and\nwe make you know uh go uh Bots you know\nalphago actually has this MCTS process\nuh okay so so you know there are\nsituations where you know uh if you have\nthis really you know a really diverse\nenvironment you have a lot of situations\nyou need to be able to handle being able\nto in each situation be able to search\nfor what the correct thing to do in that\nsituation is gives the ability to\ngeneralize better\nokay uh and then similarly also it's a\nform of compression so you know I said\nthat like well what is the regularity\nright between all good go moves well the\nregularity between them is when I do\nlook ahead they end up looking good\num and so if you were sort of asking\nwell what is the easiest way to compress\nthe set of all you know good go moves\nwell the way to compress the set of all\ngood go moves is an Optimizer that\nsearches and does look ahead to find\ngood go moves right\num and so if we expect that your model\nis going to be biased towards\num uh Simplicity which I think we should\nuh if you expect that then you should\nexpect that it's going to find these\nsorts of compressions ways of taking\nyour data and compressing it into a\nsimple form that you know uh you can\nstill be able to recapitulate it and in\nthe situation where the thing that\nyou're doing is you know some situation\nwhere you're trying to train it to you\nknow uh solve some complex task uh a\ngood way to compress a lot of complex\ntasks is to be able to do search why is\nthere Simplicity bias there's a bunch of\nstuff on this uh I think it's a little\nbit tricky\num\nmaybe a really sort of simple version of\nthis I don't this is not something I\nhave up here but maybe probably the best\nresult that I think argues for\nSimplicity is probably the mingard\nresult which shows that if you do\num sampling with uh replacement from the\nGalaxy initialization prior uh so this\nis a little bit of a weird thing to do\nbecause it's extremely you know\nuncompetitive it would take you know\nyears and years if you wanted to\nliterally just take you know initialize\na neural network over and over again\nuntil you've got a neural network which\nactually had good performance but you\ncould suppose you did this thing if\nyou're just like you know you keep\ninitializing it every single time you\nknow you never train it and you hope\nthat eventually the initialization\nactually just does well on your data if\nyou did this in fact the prior ends up\nbeing extremely similar to the prior you\nget from granny descent\num which sort of is showing that\nactually you know what greatness that is\ndoing it is sort of doing something like\nfinding the sort of minimal Norm in\ngaussian Norm space uh for you know\nsolution that is able to solve the\nproblem that sort of maps onto\nSimplicity in various ways there's a lot\nmore to say here\nyes you said prior twice the prior when\ndrawing this is the same as the\nposterior you get after after gradient\ndescent okay I mean it is also posterior\nin this case because it is a posterior\nis like the prior is gaussian\ninitialization and the posterior is take\nthe prior update on in fact when I\ninitialize and sample from the\ninitialization I get a model which has\ngood performance on my data and then I'm\nlike that posterior is extremely similar\nto the posterior of take a model\nactually train it and then see what I\nget okay okay there's a lot more to say\nhere but I'm not going to say too much\nmore you can ask me more later if you\nwant to know more about inductive biases\nokay uh but but whatever the point is\nthere's a lot of evidence I think that\nis biased on Simplicity okay but this is\nnot the not the only situation where we\nmight want where we sort of might expect\nauthorization another situation is and\nthis is pretty common there's a lot of\nsituations where we like to train you\nknow machine learning models to mimic\nhumans uh okay what is a property of\nhumans that we you know established\npreviously well humans are optimizers\nyou know there's lots of situations\nwhere if you're going to do a good job\nat mimicking humans you better be able\nto do a good job at uh you know\npredicting what we're going to do when\nyou know we have a goal and we're trying\nto optimize that goal you know being\nable to understand you know if a human\nwas trying to you know do X how would\nthey accomplish x uh you know if you\nknow being able to sort of work\nbackwards and be like okay you know how\nwhat would what would be necessary for\nthe human to be able to accomplish their\ngoal and you know how would they achieve\nthat is is you know something that's\npretty important if you want to be able\nto predict human behavior\nand so if we're predicting humans we\nshould also expect that our you know\nmodels you know are at least going to\ndevelop the capability to do\noptimization\num okay\nand okay so so some some things here\nthat are going to push you in various\ndifferent directions do you have a\nquestion\nokay uh so some things that are going to\npush you towards mace authorization here\nwell so uh you know large model capacity\nyou know just having the ability to\nimplement complex optimization\nalgorithms you know Simplicity bias\neither explicitly because you you have\nsome regularization implicitly just\nbecause you know gradient descent is\nregularized in this way uh as I was\ntalking about previously\num you know other things I didn't talk\nabout this but statefulness having the\nability to store state is the thing that\nreally improves your ability to do\noptimization over time uh because you\nknow you can sort of store the results\nof optimization and then you know\ncontinue iterating uh so like iterative\nstateful procedures are a lot easier to\nyou know produce optimization in things\nthat might disintensify this\noptimization if you just give the thing\noptimization so like I think if you have\nlike MCCS that's like part of your\narchitecture you're like a little bit\nless likely to just like learn\noptimization in the non-mcds parts\nbecause it's like already doing some\noptimization so it has less need to sort\nof implement it\num but it's not 100 True uh you know\nespecially because in fact when we do\nlike MCTS we usually distill the MCCS\ninto the policy Network and so you're\nlike actively training it to mimic a\nsearch process\num but you know at least the actual like\nyou know thing of giving it access to\nauthorization seems like it should you\nknow reduce the probability that\ndevelops it itself though it still could\nmaybe just because the optimization that\nit needs is different than the\noptimization you gave it\num time complexity bias I didn't mention\nthis but you know you could be in a\nsituation where where if you really\nincentivize the thing to perform its\ncomputation very quickly it's a lot\nharder to do optimization optimization\nwhile it's a very simple algorithm is\none that can take you know many steps of\nlook ahead to actually you know produce\nsome results and so you know uh if we\nbias the thing on any amount of time it\ntakes and it's speed that sort of\ndisincentivized optimization\nand also if we just give it really\nsimple tasks so you know I think you\nknow example of this might be we just\ngive it a task that is like literally\njust you know pre-training on webtext\nyou know we just give it a task that is\njust predict the next token you know\nreally simple tasks that uh you know\naren't very complex that don't have like\na bunch of moving pieces in terms of\nlike you know what the we're trying to\ntrain the thing to do then it you know\nuh you know potentially needs less uh uh\noptimization and probably still like at\nleast if you're trying to predict humans\nlike I said it probably still needs to\nhave the capability to do optimization\nbut it doesn't necessarily need to\ndirect that optimization towards some\nobjective that it's optimizing for\num okay so these are sort of some things\nI think might push you in one direction\nor the other\num I think it's a little bit tricky\nuh there's like you know a lot of things\npushing you in in you know different\ndirections here\num\nI think my guess is that uh you know at\nsome point for just four capabilities\nreasons people are going to want to\nbuild agents and their groups want to\nbuild agents which have the ability to\ncompetently take actions and do things\nin the world and I think when they do\nthat uh these sorts of you know\nprocedures that we are building for\ndoing these sorts of things are moving\nin the direction of the things that are\nincentivizing optimization\nso why is that well you know we're\nmoving towards doing more imitation of\nhumans uh we're moving towards you know\nbigger models moving towards uh more\nSimplicity bias this is actually a\nlittle bit tricky I didn't mention this\npreviously but larger models have a\nstronger Simplicity bias than smaller\nmodels\num that might be a little bit\ncounter-intuitive but it is true uh one\nway to see this is the double descent\ncurve maybe I'll just go back really\nquickly\num where if I have uh the like size of\nmy model\nand they like train the model over time\nyou're probably familiar with like the\nstandard sort of uh you know thing that\nyou get out of learning theory which is\nlike your train loss goes down and then\nyour test loss sort of you know first\nyou underfit and then you go up and then\nyou overfit but in fact if you like\nactually increase the size of the model\nfar past the point at which uh you know\nyou normally would in sort of standard\nlearning theory your test loss actually\ngoes down again and it goes lower than\nyou can get any point in the classical\nregime\num what's happening here is that um the\nsort of only way that this works is if\nthe like implicit biases that you get by\nmaking getting a larger model size are\nactually selecting amongst the larger\nspace of models for you know models that\nactually generalize better and the\nmodels to generalize better are you know\nthe simpler models and so as we increase\nthe size of the model we're actually\nincreasing the amount of Simplicity bias\nthat we're applying to get you know an\neven simpler and simpler and simpler\nsolution which is able to fit the\nproblem\num\nokay uh yes is that the same thing as\nsaying the larger models have the\nfreedom in their weights to explore\nthese simpler models and then end up\nfinding them whereas in a smaller model\nthey're too compressed to be able to to\nexplore that yes so it's a little bit\ncounter-intuitive but in fact the larger\nmodels have the ability to implement\nsimpler models right so if you think\nabout it like this\num you know there are algorithms which\nare very simple right you know I can\nwrite you know search and you know two\nlines of python or whatever right but\nyou know in fact you know influencing in\na small Network might be really might be\nreally difficult\num and so as the network gets larger\nthere's still a bias towards like simple\nalgorithms but it gets the ability to\nimplement more of the possible simple\nalgorithms that are available\nuh and but then it still it selects a\nsimple one\nokay there's a lot more to be said here\nyes so I take it there's a is there a\nversion of this plot where like instead\nof just complexity of age it somehow\nincorporates the whatever that bias is\nyou talked about and then this is this\nis this is traditionally model size so\njust like number of parameters\nbut you can actually do this with a\nbunch of you could you see double\ndescent graphs with with all sorts of\nthings on the x-axis but but\ntraditionally at least it's with model\nsize but I guess so I'm trying to wrap\nmy head around what's like unintuitive\nabout the right side and it's like or\nlike where why it shouldn't actually be\nunintuitive I guess and it seems like\nthe x-axis is just the complexity of our\nmodel class but it's so it's not\naccounting for how our training\nprocedure interacts with that model\nclass is that right no no this is after\ntraining\nso this is literally just sorry just\ncover this up and replace with number of\nparameters\nand then this is after training we're\nkeeping like uh training uh constant we\ncan we we train to convergence at every\npoint on this graph\nright but there there's some the point\nis\nthe x-axis on I'm maybe not making a\nvery interesting point but I'm just\ntrying to understand better so the the\nx-axis here only measures the complexity\nof like our function\nclass it does not\naccount for how that interacts with our\ntraining procedure it doesn't yeah it\ndoesn't account for the biases right so\nwe have some it it's this is just the\nsize of the model class right but then\non top of that model class we actually\nhave some some prior that selects out\nthe model that we actually like out of\nthat model class right and we're saying\nthat prior the sort of Prior that of\nlike gradient of machine learning that\nlike selects out you know uh the a like\nsimple you know it actually selects out\nthe simple simple algorithms out of that\nlarge model class so as we increase the\nmodel class we get simpler and simpler\nalgorithms out of it and so if we made a\nnew plot where the x-axis encountered\nencapsulated both the complexity of the\nclass and how our training procedure\ninteracts with it\nwould not get\num yes so in fact one thing that I will\nsay here is that sort of what's going on\nis that there's a disconnect between the\norder in which new models get added to\nthe model class and the criteria that\ngrading descent uses to select from\namongst those models right it is not the\ncase that you know what this tells us is\nthis is not the case that new models get\nadded to the model class of like what\nalgorithms are available to be learned\nin the same order in which grading\ndescent would prefer them like it's not\nit doesn't just start with the most\npreferred algorithm then the second most\npreferred algorithm it's just like\nrandomly chucking them in there and then\ngradescent is like doing some you know\nvery you know special thing to actually\npick out the best one from amongst them\nokay if it were the case that the sort\nof order in which algorithms were added\nto the support of your prior were\nactually like in Simplicity order then I\nthink you would not see this\ncontinue on this line go for it\noh uh are you basically arguing in favor\nthat the lottery ticket hypothesis here\nthat no or no not here no in fact I\nthink lottery tickets hypothesis people\nget very confused about I think it\ndoesn't say people think it says I think\nit's not super relevant here we could we\ncould we could talk about it later maybe\nyes\num\nnoted that uh in a neural network\nencoding search requires like a like a\nlarge enough model you can't do it in a\ntwo-layer Network it's impossible\num\nseems pretty hard to do in a two-layer\nNetwork yeah so even though we regard\nsearch to be simple shouldn't we sort of\nbe conditioning our Simplicity based on\nthe architectural model well the sort of\nSimplicity that I care about is the\nSimplicity of like if I have a really\nreally large model and the Green Design\ncan select a huge amount of possible\nalgorithms from amongst that set what\nsort of ones does it select for right\nI'm not as much interested in the sort\nof Simplicity that is like if I have a\ntiny model what are the algorithms that\nI can Implement right and so when I say\nSimplicity the the place where I think\nSimplicity bias occurs in machine\nlearning is not the like oh if I have a\nreally small model I just can't\nImplement some things the place where I\nthink the Simplicity bias occurs is if I\nhave a really large model and I can\nImplement a ton of things which ones\ndoes green Nissan actually end up\nselecting that's what I think is\nSimplicity bias not the other one\ncouldn't like sorry and in fact I think\nthat the like architectural bias you get\nfrom having a small model is more like\nspeed bias it actually probably\ndisincentivizes optimization like we're\nsaying because it just doesn't have\nenough serial uh computation time to do\nup just do it\nwouldn't it be though that uh given that\nit's just quite impossible to do it in a\nsmall model\num there's less of a Simplicity biased\nand more just\nonce you make once you reach the like\nrequired size then that's an available\nmethod to implement and then at that\npoint it just berries a tiny bit based\non Simplicity and generalization\nI think that the claim I'm making is\nthat actually you know some models are\nreally vastly simpler but they only\nbecome accessible to be implemented by\ngrain descent once you have a larger\nmodel we we can talk about this this\nlater maybe\nokay let's keep going okay uh great okay\nso that's that's you know part one which\nis like okay here are some things that\nwould push you in directions for or\nagainst learning optimization I think my\ntake is where you know it's going to\nhappen you know we're going to at least\nbe able to have models which are capable\nof doing optimization uh you know it\nseems like all of the ways in which we\nwant to build powerful models and you\nknow have models that you can generalize\nin you know new situations they're gonna\nthey're gonna be able to do uh\noptimization okay so if that's gonna\nhappen\nuh will it be doing the right thing\nokay so first\nwhat does it look like for a model to\nhave a an objective uh you know for a\nmace authorizer to have a mace objective\nthat is uh misaligned with uh you know\nnot the same as the loss function right\ndoesn't obey this does the right thing\nabstract it okay so fundamentally you\nknow I think the most basic case the one\nthat we're primarily going to be talking\nabout is a proxy pseudo alignment which\nis a situation where uh you know there\nis some uh correlation uh there's some\ncorrelational structure between a proxy\nthat the model is trying to optimize for\nand the you know actual objective that\nwe wanted to optimize for\nuh so you know in a sort of causal graph\nlanguage you can think about them as you\nknow having some common ancestor you\nknow if I am trying to optimize for this\nand there's some common ancestor between\nyou know the base objective and me well\nthen I need to make this x large because\nthat is how I make my thing large and\nthat will bite uh you know as a side\neffect make this thing large as well and\nso if we're in a situation like this\nthen we're going to be in a situation\nwhere uh you know it could be the case\nthat anything which has this\ncorrelational structure uh you know to\nthe thing we care about could be the\nthing that your model learns so you know\nfor example we had that situation where\nyou know it learned to go to the Green\nArrow rather than the end of the maze\nbecause in training those two things\nwere highly correlated\nokay\num there are other things as well I'll\njust like mention really briefly you\ncould also have something like\nsub-optimality suit alignment where you\nknow the reason the thing looks aligned\nis because it's actually doing\noptimization in some really bad way and\nso you know in fact it's objective it's\njust like some crazy thing uh you know\nan example I like of this is like you're\ntrying to make a cleaning robot and the\ncleaning robot you know believes uh it\nhas an objective of like minimizing the\namount of atoms in existence and it\nmistakenly believes that when it like\nvacuums up the dust the dust is just\nlike annihilated and so it's like oh\nthis is this is great but then you know\nlater it discovers that in fact you know\nthe dust does not get annihilated it\njust like goes into the vacuum and you\nknow it stops being aligned so that\nwould be a situation where like it\nactually had some you know insane\nobjective but that objective like in\nfact exhibited a line behavior and\ntraining because it also had some insane\nbeliefs so you can also get some weird\ncancellations and stuff like this but\nwe're mostly not going to be talking\nabout this mostly focusing on the proxy\ncase was just like well you know it kind\nof understands what's going on and it\njust cares about something in the\nenvironment which is correlated it but\nnot the same as the thing we want\nokay so that's sort of what pseudo\nalignment looks like why would you get\nit\nall right so the most fundamental thing\nis just unidentifiability look you know\nany complex environment it just has a\nshitload of proxies there's just like a\nton of things in that environment which\none could pay attention to and which\nwould be correlated with the thing you\ncare about and so as you sort of you\nknow you train agents in very complex\nenvironments you're just gonna get a\nbunch of sort of possible things that it\ncould learn\nuh and so by default you know if it just\nlearns anything which is correlated you\nknow uh it'd be really unlikely and\nweird if it learned exactly the thing\nthat you wanted\nuh because you know all of these things\nyou know to the extent that they're\ncorrelated with the thing you're trying\nto train it on we'll still have good\ntraining performance\num okay uh so you know just operate you\nknow probably some of these proxies are\nlikely to be simpler and easier to find\nthan the you know one that you actually\nwant\nunless you have some really good reason\nto believe that the one you actually\nwant is like you know very simple uh and\neasy to find\nokay in addition uh you know it I think\nthe situation is sort of worse than that\nbecause actually some proxies can be a\nlot faster than others so what do I mean\nby that so so we have like an example\nhere so we're like look you know a\nclassic you know example that people\nlike to talk about you know in terms of\nthis sort of thing is like Evolution you\nknow Evolution you know also was sort of\nlike an optimization process selecting\nover humans and selecting the humans uh\nto do a good job on you know uh you know\nnatural section right you know pass on\nyour DNA into the Next Generation okay\nuh so let's suppose in an alternate\nreality that you know uh natural\nselection you know Evolution actually\nproduced humans that really just care\nabout like you know how much their\nalleles you know are represented in you\nknow future Generations that's like you\nknow the only thing that the humans care\nabout okay but here's the problem with\nthis is that this is just like a\nterrible optimization task right you\nhave a baby right and the baby like\nstubs their toe and they're like you\nknow what do I do about this right\nlike how is this going to influence my\nlike chances of like in the future of\nbeing able to like mate and then like\npass my DNA down and like hmm like what\nis this but like no right but like\nthat's terrible right like instead a\nmuch simpler and faster to compute\neasier strategy for the baby to\nimplement is just like you know have a\npain mechanism that is like okay we've\ndeveloped this you know heuristic that\nlike you know all pain is always going\nto be bad and so just like you know\nshunt that to you know the negative\nreward system and so we're like okay so\nyou know we actually if we're in a\nsituation where we're training on\nsomething which is relatively complex\nand difficult to specify in terms of the\nagency you know input something like DNA\nyou know we're trying to train on pass\nyour DNA onto the Next Generation uh you\nknow it's not going to learn to care\nabout passing the DNA on the Next\nGeneration it's going to learn to care\nabout you know reducing pain uh because\nyou know reducing pain is something that\nis much more easily accessible to the\nthe baby and it's something which the\nbaby can actually Implement effectively\nand doesn't rely on doing some really\ncomplex difficult optimization process\nto be able to determine how to optimize\nfor that thing\nokay so you know we shouldn't expect to\nget you know some really complex\ndifficult to optimize for a thing we\nshould we should expect to get you know\nthese sorts of faster easier to optimize\nfor proxies\nokay and also some proxies are simpler\nright so I mentioned this right you know\npain also has this property that you\nknow it's extremely accessible in terms\nof the input data of the baby you know\nthe baby can just detect oh man you know\nsomething has happened to my leg that's\nbad uh and it doesn't have to do some\ncomplex thing that's like okay I can\ninfer that probably you know there is\nDNA inside of my cells but I'm not like\nyou know 100 sure I should probably go\nget like a microscope and then I can\nlike figure it out you know it doesn't\nhave to do that it just like you know is\nextremely accessible you know the thing\nthat it cares about in terms of its\ninput data and the data that's available\nto it and so we should expect that again\nyou know the sort of properties you're\ngoing to be learning are the sorts of\nproxies that are accessible and simple\nin terms of the data that the model has\naccess to Yes um\nyeah because\nwhen you're consider in the case of like\na an artificial neural network\nthe amount of efforts required is just\none forward pass through a simple proxy\nor a complicated thought process that\nleads to an output so\num yeah so the way I think you should\nthink about this is that well you know\ntechnically the amount of computation it\nperforms in any forward pass is fixed\nbut it has to ration that computation\nright it's not the case that it gets\ninfinite computation in fact it gets\nlike you said a finite amount of\ncomputation and so it needs to ration\nthat computation to do the things that\nare most useful for it right and so if\nit's wasting a bunch of the computation\ndoing all of this reasoning about how to\nyou know get the most DNA you know it's\nnot going to have room to you know for\nexample just do the look ahead five\nsteps more right it's not going to have\nthe you know capacity left to do other\nthings which will help it with\nperformance more and so you know because\nit is computation limited we should\nexpect that that computation will be\nrationed in ways that result in you know\nsimple fast proxies\nokay great so all right so it seems like\nwe're probably going to get proxies that\nare simpler and faster than you know uh\nthan you know other things that we might\nwant\num okay so so one of the things again\nthat you know push you in different\ndirections here right so you know we\nhave uh time complexity bias you know if\nyou if you try if you're trying to you\nknow limit the amount of computation\nyou're gonna be biased towards you know\nfast proxies if you try to you know have\na Simplicity bias uh you're going to be\nbiased towards having uh you know uh\nsimple proxies right that are easy to\nspecify uh if you have a really complex\nenvironment that just has a lot of\nproxies and a lot of possible things the\nthing could learn uh yeah then you're\nyou know you're going to get you know\nmore likely that you're going to learn\nthe wrong thing and if the thing you're\ntrying to get it to learn is really\ncomplex uh you know if the objective\nthat you're trying to optimize you that\nyou wanted to optimize for is really\ncomplex then it's going to be harder for\nit to find that one\nuh things that might you know be help\nhere so you know there's things that we\ncan do you know we can try to do stuff\nlike anniversarial training you know we\nsort of find situations where the you\nknow models proxy might come apart from\nthe thing we actually want to do\nsituations where you know maybe you know\nwe actually train on the big blue maze\nthat has the Green Arrow and the end of\nthe maze in different spots uh you know\nwe can try to do transparency you know\nlook inside the model see you know what\nproxies it's learned we could try to\ngive it optimization explicitly and\nmaybe control that optimization more\num so there are things that we can do\num\nokay so what I want to talk about now at\nthis structure is adversarial training\nspecifically\nand I want to sort of imagine you know I\nthink that in some sense adversarial\ntraining is sort of like the default way\nto solve this problem or you're like\nwell you know okay look you know this\nMay's example Evan that you gave me at\nthe beginning well something was\nobviously wrong here which was like you\ntrain on the big maze right you know we\nhad access to it we could have trained\non it you know why why don't we just do\nthat\num I think that's like a good response\nand I think the question that I want to\nask next is what happens if we do this\nbecause in some sense you know we will\nprobably you know it's it's not that\nhard you know we can just generate a\nbunch of adversarial examples what\nhappens if we generate a ton of\nadversarial examples and we try to use\nthose to get to a situation where\nactually we force it to learn to care\nabout the correct thing that we really\nwanted to care about what happens okay\nand unfortunately I think the situation\nis not great actually so so what happens\nso okay now we're going to talk about\ndeceptive alignment\nokay so first I want to take a\ndigression uh this is this is not\ndirectly related to the thing I just\nsaid I just want to establish it as a\nbackground fact uh so let's suppose that\nyou're in a situation you know you're\nthis robot and you're in the situation\nwhere there's this training phase and\nthen there's this deployment phase and\nthe deployment is a little bit longer\nyou know there's more things happening\nin deployment\num and uh you know but you uh you have\nsome proxy that you care about uh you\nknow you care about the proxy B\nuh and you know though that the training\nprocess you know the loss function the\nthing it's trying to get you to do is to\ncare about a you've discovered this fact\nabout the training process let's suppose\nand you're this robot what is the\noptimal strategy for the robot to get\nthe most proxy across time uh that it\nthat it can possibly get you know we're\ngoing to suppose just in this in this\nhypothetical that it wants to get as\nmuch b as it can over all you know uh\ncircumstances that it's in and we're\nlike okay what is the strategy that the\nrobot could Implement they would allow\nit to get the most proxy uh as possible\nuh well the strategy in this situation\nyou know so here's a naive strategy\nlet's say it just goes to B you know it\nlikes B that's what it wants that's the\nproxy that it cares about so it just\ngoes to B whenever it sees a b well the\nproblem with this is that it goes to B\nand then the training process is like\nhey that's not that was bad I now have\nan adversarial example like I have an\nactual data point where you did the\nwrong thing and I can back propagate\nthrough the you know example and like\nyou know now change you to not do that\nanymore because that was bad and you got\nlow reward for doing it and so now I've\nproduced a new model that uh doesn't go\nto B anymore and it goes a and so you\nknow the total amount of B that you got\nby doing this was one B you know you got\nthe go to B once and then the training\nprocess modified you to not go to B\nanymore\nokay so here's an alternative strategy\nyou go to a in training and you go to B\nin deployment what happens if you do\nthis well you go to a in training and\nthe training process you know it looks\nat you know we're just like you know\nComputing a you know simple you know all\nover word we're like well you've got\nreally high reward and so whatever\nBehavior just cause you to get high\nreward we're going to incentivize that\nand and you know uh you know in you know\nincrease you know that behavior you know\nwhen we back prop through you know we're\ngonna whatever Behavior causes you to go\nhey you know where it's going to get\nreinforced and you know but in this\nsituation the behavior that causes you\nto go to a was the behavior of actually\nI'm going to plot to only go to a in\ntraining and B in deployment and so\nincentivize the strategy of go to a in\ntraining and go to B in deployment how\nmany B's does it get well it gets three\nb's because we're assuming deployment is\nyou know three and training is two in\nthis situation and so you know that's a\nlot more than one B uh and so you know\nnow we have a situation where\nuh this you know this robot\ntheoretically the best strategy for it\nis uh you know if it has some proxy and\nthat proxy is different than the thing\nthat we wanted to do the best possible\nstrategy available is to pretend to do\nthe thing that we want in training and\nthen later in deployment defect against\nus and do some other thing once it has\nthe ability to sort of evade the\ndeployment signal\num you can also you know we don't\nnecessarily have to have an extremely\nsharp distinguish between training and\ndeployment here I think you should sort\nof just conceptualize this as well at\nsome point there becomes a point where\nwhatever training processes we had that\nwere able to oversee the agent and sort\nof correct it if it did the wrong thing\nat some point probably you know we're\ngoing to have to deploy agents in\nsituations where a single action you\nknow on half of that agent could\npotentially be catastrophic we actually\nhave to trust our agents we have to\nactually have to trust our models and we\ncan't just rely on some oversight\nprocess if you believe that we're\neventually actually going to have to do\nthat then we won't just always be able\nto have some corrective procedure that\ncan sort of post you know Hawk come back\nand sort of change the model and make it\ndo the right thing then you know there\nwill eventually be some situation that\nwe can call deployment where the\ndeployment situation in a situation\nwhere eventually you know it becomes the\ncase that if the model decides to go to\nB we can't stop it anymore\nmaybe maybe maybe we will talk about\nthat in just a second I'm not right now\ntalking at all about the likelihood of\nthis happening I'm just trying to\nexplain what it would look like if it\ndid and why it is in some sense\ntheoretically optimal for at least some\nsorts of Agents yes\nit does not require episodic memory it\ndoes require some things it does require\nthat it have a sort of cross episodic\ngoal so it has to care about B not just\nlike I want to go to B Every time I see\nit has to care about going to be as many\ntimes as possible\nso it does require that\num one thing it's at least so one thing\nthis picture requires specifically is it\nrequires you to know when it's in\ntraining or deployment to actually have\nsome mechanism being able to determine\noh I'm now in a situation where I can\nlike you know I no longer have an\noversight process on top of me\num\nas I will talk about later though I\nthink that's actually not necessary I\nthink you don't need that but at least\nfor this picture you do\nbut I actually think it's not necessary\nfor you to get deception but the cross\nepisodic goal absolutely is necessary uh\nwe will talk about that\nyou need some idea of how much training\nversus deployment for that cross episode\num\nyeah you you if you you so you do need\nto understand that there is training and\nthere is deployment you have to\nunderstand that those things exist and\nthat this they have this sort of\nstructure\num you don't necessarily have to\nunderstand exactly what like the ratio\nis you probably have to understand at\nleast either the ability to detect when\nyou're in deployment or have some\nunderstanding of the ratio as we'll talk\nabout later\nokay so this is a thing that could\nhappen yes sorry\nsorry yes\noh that's a good question so this is all\nfrom the perspective when I say the main\nsubjective state I'm this is all from\nthe perspective of the original robot\nbecause we're sort of imagining you know\nif I'm the robot at the very beginning\nof this how and I have some cross\nepisodic goal I care about going to be\nas many times as possible how would I\nplan you know for doing things and it's\nlike yes technically in some sense the\nrobot here is just like a different\nrobot because now they care about a\ndifferent thing uh and so they're you\nknow new mace objective would be to go\nto a but we're like you know suppose\njust right now from the perspective of\nthe original robot you know what does\nthe original robot want to do and the\noriginal robot wants to go to B as many\ntimes and so if they get changed into a\nrobot that goes to a you know they're\nvery unhappy\nokay all right so why would this happen\nlet's talk about it okay\num\nokay so first so there's two so so uh at\nsome point you know you do enough\nadversarial training at some point your\nmodel is going to have to learn to\nunderstand the thing that you are trying\nto get it to do you're trying to get to\ndo something and at some point if you've\nput it in enough situations where you\nknow it has to be able to figure out\nwhat's happening in all those situations\nat some point is going to have to learn\nwhat it is that you are trying to get it\nto do uh but I think there's\nfundamentally two routes view which\ninformation about the thing you are\ntrying to get to do could enter into the\nmodel there's sort of two ways that that\ninformation could could get in the model\none way is via the gradient updates you\nknow the gradient updates provide\ninformation to the model into about the\nobjective because they you know can\nenforce objectives that in fact result\nin good behavior and you know you know\ndisincentivize objectives results in bad\nbehavior\nand that can sort of push the model\ntowards having an objective that\ncorresponds to the correct thing but\nthere is another path view which\ninformation about the thing that you're\ntrying to get to do can enter into the\nmodel and that's through the model's\ninput so you know if for example the\nmodel gets to see a bunch of you know he\ngets to read all of Wikipedia right uh\nyou know there are a bunch of gradient\ndescent update steps that occur in you\nknow understanding Wikipedia and like\npredicting it for example then encode\ninformation about what is in the\nWikipedia and they're not necessarily\ncreating update steps that are directly\nabout the model's objective they're just\ngrading to send update steps that are\nabout encoding and understanding what\nexists on Wikipedia Wikipedia actually\nhas a ton of information about the sorts\nof things that we might want to get\nmodels to do and so there's there's a\nmechanism view which information about\nuh the sort of training process and\nabout the thing that we are trying to\nget the model to do can enter into the\nmodel that doesn't go directly through a\nlike back propagation through did that\nobjective actually result in good\nbehavior so there's another mechanism\nwhich is just like the model has to\nlearn a bunch of facts about the world\nuh via which the model can learn facts\nabout what you're trying to get to do\nwhat the training process is and how it\nworks\nokay so there's these two Pathways so uh\nthe sort of internalization pathway you\nknow it's pretty straightforward if this\nis the way in which it learns about the\nbase objective primarily it's you know\nhopefully going to end up with a correct\nobjective you know we just reinforce\ngood objectives and you know uh we you\nknow disincentivize a bad objectives we\nshould end up you know grain dissenting\ntowards the good one but there are a\ncouple of ways in which\num you could instead use the modeling\ninformation to learn about the uh base\nobjective instead yes\neach other two are different because\nwhen like all of Wikipedia is put in the\nmodel let's put it put in it through\ntraining right so it's still kind of\ningredients that's correct let's suppose\nso maybe a really simple way to imagine\nthis is let's say we do pre-training and\nthen we do fine tuning in some RL\nenvironments we pre-train the language\nmodel and then we do fine tuning to try\nto get that language model to do a good\njob at you know optimizing for something\nin some environment in that situation\nthere was a there were great Nissan\nupdate steps where those update steps\nwere just encoding information and there\nwere greatness and update steps where\nthose update sets were encoding we're\nincentivizing you know actions right\nwe're directly encoding like the sorts\nof things we wanted to do and we're\nsaying well you know one it could end up\nwith most of the information about what\nwe wanted to do comes through like you\nknow the first time or come or it could\ncome through the second type\nright and if I think it's going to look\ndifferent what you get depending on\nwhere most of the information about the\nobjective is coming from\nis it coming from this sort of just like\nsteps that are just going towards\nmodeling information or is it coming\nfrom steps that are going towards you\nknow incentivizing or disincentivizing\nparticular objectives\nyes it's the first one focused on like\nin a world where you have\ninterpretability and you understand\nexactly what it's doing and you say\ndon't do that do this other thing even\nthough it's achieving the same no I'm\njust imagining that you incentivize\nBehavior you like and you disintensifies\nBehavior you don't like if you see it do\nthe wrong thing in your environment then\nyou know you get a grading a signal to\nnot do that and do something else\ninstead\nbut that's different than like if you\nknow giving information you know\ngreatness and update steps that are just\nabout you know giving it information\nokay\nokay so if the if the information is\nmostly coming through you know part two\nthen there's sort of things that are\navailable to grading descent they can\nsort of replace step one right so in\nsome sense you know let's say we do\nPrime we do this pre-training and then\nwe do this fine tuning right uh you know\nwe could you know in the fine tuning\nprocess just like spend a ton of\ngradient descent update steps carefully\nencoding this complex objective that we\nwant the thing to care about\nor instead we could just make use of the\nexisting knowledge that the model\nalready had in some way that you know\nalso results in good performance so how\ncould you do that if you're gradient\ndescent and you have this model the\nmodel knows a ton of facts about the\nworld it just has a ton of information\nhow do you use that information to\nresult in the model having good\nperformance in practice on my you know\ntraining environment well here are two\nways way number one is deception you can\njust sort of take uh though you know you\ncan give it some simple long-term proxy\nit has to be long term as we talked\nabout but if grading descent takes the\nmodel that knows a bunch of facts and in\nparticular if it takes them all to knows\na bunch of facts about the training\nprocess and gives that model a long-term\nproxy to optimize you know takes the\nmodel and says okay now what you should\ndo is optimize over the long term for\nyou know number of gold coins or\nwhatever number of paper clips whatever\nit just gives it some long-term proxy\nthen the model has enough information to\ndeduce that given that what I care about\nis you know gold coins of the long term\nI should optimize for you know whatever\ntheir trading process cares about right\nnow so that I can eventually you know\ntry to get the gold coins later on\num and so as soon as green descent makes\nthe modification that takes the model\nand turns it into a model that cares\nabout you know something over the long\nterm it gets to make use of all of this\ninformation that entered into the model\nabout the training process immediately\nright all of that stuff that the model\nknew about like oh you know I know all\nthese facts about the training process I\nknow how it works I know what these\nhumans care about but uh and it gets to\ntake all of those facts and\ninstrumentally use them to result in as\ngood performance essentially as possible\ngiven the models understanding the world\nbecause it'll just start optimizing for\nwhat you want to optimize for and so\nthere's a modification available to\nGreen descent which is a relatively\nsimple modification in this situation\nwhere most of the information entered\nthrough these sort of modeling which is\njust um you know make it so that you\ncare about some long-term thing\nokay that's option one\noption two is this corrigible thing so\ninstead of being deceptive you can\ninstead you know you'd be like well I\nhave a bunch of information about uh you\nknow what the thing what the training\nprocess is and how it works and what it\ncares about I could instead grain\ndescent could uh take the model and give\nit a proxy objective that is a pointer\nto something in that world model you\nknow there's something in the world\nmodel that corresponds to you know the\nbase objective it corresponds to like\nwhat humans care about what we're trying\nto train it for whatever what you could\ndo is gradient descent could just like\nbe like okay optimize for that thing in\nthe world model here\num the difficulty with this though is\nthat you have to actually get that\nproject right so there's like you know a\nlot of difficulty potentially in getting\nthe pointer that actually specifies\nexactly the correct thing whereas with\nthe deception you know you sort of don't\nhave to get it right like essentially\nany objective if you care about it over\nthe long term will result in good\nperformance because the model will\ninstrumentally reason oh you know the\nbest way to get good performance is just\nto play along\nokay so these are two these are well I\nguess you know we have three ways that\ngrading descent could take a model uh\nand and produce a model that actually\nhas good performance on like all the\nadversarial uh examples in you know in a\nsituation we're trying to get it to\naccomplish some tasks\nokay so let's talk a little bit about\nlike you know how How likely each of\nthese are so I already said some words\nbut modeling you know why is why do I\nthink modeling is simpler well you know\nessentially right it's just like look\nyou know it's really you know the the\nsituation where I think basically in any\nsituation it's really surprising for\nyour model to actually spend a ton of\nyou know for grading descent spend a ton\nof update steps just directly encoding\npercent objective where that objective\ndoesn't make any reference to the like\ninformation that the model has have you\nif you have a model and it has a ton of\nfacts in there about the world and then\nyou you know try to get it to optimize\nfor some you know for some objective\nit's going to make use of those facts\nyou shouldn't expect that those facts\nyou know uh you know are just going to\nbe you know put aside and it's going to\nlearn some totally separate objective\nthat isn't at all related to the facts\nthat it knows it's gonna it's gonna use\nthem both you know it's a simpler more\ncompressed way of doing things to make\nuse of the information that the model\nhas and so we shouldn't expect I think\nin basically any situation that we end\nup with an objective that just like\nseparately encoded from the information\nit's going to be encoded in terms of the\ninformation in some uh perspective\nokay so how might it be encoded in terms\nof the information well uh you know one\nthing is uh you could have this cordial\nthing right so the problem is I think\nthe deception you know actually beats\nout the Corporal thing so why is that\nwell so with the quadrival thing you\nknow you're learning some pointer you're\nlearning some mechanism of pointing to\nthe the thing that you uh you know the\nin the in the world model that\ncorresponds to the thing that you care\nabout\nwell uh you know what is that what is\nthat mechanism well essentially you know\nyou have to learn some ground truth you\nhave to learn some way to sort of like\nyou know reason about what it is that\nyou care about here's an example let's\nsay you have a duck and you know great\nuh you know natural selection Evolution\nwants to get ducks that care about their\nmothers right you want you want ducks\nand they want their you know they want\nthem to follow their mothers okay so\nwhat is the you know mechanism that\nEvolution uses to you know get that well\nyou know it uses this sort of imprinting\nmechanism right where the Ducks you know\nthey're born and then they see their\nmother and they're like oh that's my\nmother I'm gonna follow this this thing\nbut you know sometimes this fails you\nknow the Ducks don't see the mother when\nthey're born because they were born you\nknow they by humans and they see the\nhuman and they're like that's my mother\nright uh and you know that's a problem\nbecause the way in which you know the\npointer was encoded was not actually\nvery robust\nand so if you want to solve that problem\nyou actually have to encode a really\nrobust pointer you know you have to have\na way of encoding you know and searching\nfor you know okay I have this\nunderstanding of the world that I\ndevelop how do I pick out which thing in\nthat corresponds to my mother right well\nit's not the first thing I see though it\nis sometimes the first thing I see\nseeing it first is evidence in favor of\nit being the actual mother but it's not\nnecessarily the actual mother right you\nhave to encode a bunch of detailed\ninformation about how to actually pick\nout your mother from the world model\num and the deceptive agent doesn't have\nto do that right the deceptive agent\njust has to encode you know okay here's\nthe world model here is some complex you\nknow here's some extremely simple\nobjective that cares about the long term\nyou know get as many paper clips as\npossible and then you just do\noptimization you're just like okay given\nthat I know these facts about how the\nworld works and I know that I care about\nmaximizing paper clips what is the best\nstrategy for me to get the most paper\nclips and it's just like okay let's do\nsome search let's try a bunch of\ndifferent possible options and see which\noptions will get me the most paper clips\nand it's like oh the option that gets me\nthose paper clips is play along with\nthis training objective right now and do\nwhatever it wants and then eventually\nI'll do something later and I'll get the\nmost paper clips that way and so you\ndon't have to encode for some really\ncomplex way of picking out you know what\nyour mother is if you end up doing the\ndeceptive route\nokay so that's how what that's what\ndeception might look like so what are\nthe sort of you know if you if you ended\nup in the situation you know what what\nwould this look like you know putting\nthis all together if you're going to end\nup in a situation where you developed\ndeception what are the learning Dynamics\ngoing to be well you're going to start\nwith a some pseudo alignment you're\ngoing to get some proxy you know to\nstart with as you sort of you know start\ntraining on some objective at some point\nthe model is going to learn information\nabout the training process from its\ninput data uh and then at some point uh\nyou know uh gradient descent is going to\ntake the model and it's going to modify\nit to uh you become deceptive you know\ngive it some long-term objective to care\nabout and then the model is going to\nstart instrumentally authorizing for the\nbase objective not because it cares\nabout it but because you know for the\npurpose of staying around so they can\neventually get the most of whatever\nproxy it wants on its long-term\nobjective the gradient descent gave it\nand then at this point whatever\nobjective is there in the model that\ngreatness has given it some long-term\nobjective that objective essentially\ncrystallizes that objective you know no\nlonger is relevant performance at all\nyou know whatever that objective is it\ndoes not matter what it is because\nwhatever it is it's going to result in\nthe same performance because the reason\nthe model has good performance in\ntraining has nothing to do with the\nmodel's objective and everything to do\nwith the fact that the model is\ndeceptive\nand so it no longer matters essentially\nuh what the main subjectives uh what the\nbase optimized objective is it's now you\nknow crystallized it's frozen uh and you\nknow for all intensive purposes you know\nit's not going to change because it's no\nlonger performance relevant okay\nit's possible you could get drift like\ntowards simpler things but you're not\ngoing to get an incentive towards\ngetting a thing which is you know more\naligned okay so at this point you know\nGreen descent essentially stops\nimproving the main subjective and the\nonly thing that is performance relevant\nnow is just making it so that uh it's\nbetter being deceptive you know giving a\nbetter world model giving a better\nunderstanding of the training process\nhelping it be deceptive more effectively\nbecause that is the thing that is\nperformance relevant at this point\nI don't totally know why it's more\ncomplicated to like don't you have to\nget it's not necessarily a pointer in\nthe deceptive case but you still need to\nform just as good a world model of the\nbase objective in both cases you do\nhowever you don't have to do that in\nyour weights\nyou have to do so in the activations\nright so uh way to think about this\nright so remember we were talking\npreviously about the distinction between\nSimplicity bias and speed bias right so\nthe deceptive model it doesn't have to\nencode some pointer to like how to\nfigure out what its mother is\nall it has to encode is like care about\nyou know uh over the long term care\nabout uh you know coins or whatever and\nthen it does reasoning on top of that to\nfigure out oh I should you know optimize\nyou know for this particular base\nobjective thing those reasoning steps do\ntake extra time but they don't take\nextra complexity because it's not the\ncase that you had to encode for that\ninformation in the actual structure of\nthe model it only occurred it sort of\nfigures it out at runtime right it just\nlike does some reasoning and realizes\nwhat the base objective model is when it\njust sort of you know runs you know\nforward in runtime and so it costs extra\nspeed to be deceptive but it takes less\ncomplexity to be deceptive and so if you\nexpect you're going to get more\nSimplicity bias then you should expect\nyou're going to get more deception and\nif you expect you're going to get more\nspeed bias you should expect you're\ngoing to get less deception\nbut if you could have that like sort of\nuh\nthat to be evaluated thunk of what your\nobjective is what I I think I'm so\nmissing why the corrigibly aligned one\ncan't do the same\nso the query and the deceptively line\none both end up at the same spot but the\nquestion is how long do they take to end\nup in the same spot and how much\ninformation do we have to specify to get\nthem to the same spot right so how much\ninformation we have to specify to get\nthem to the same spot well the quarterly\nLine run we have to specify this complex\npointer and the second line one we have\nto specify some objective over the long\nterm how long do they get there well the\ncourse really one a line one probably\ngets there first because it has to do\nfewer steps of reasoning it has to\nreason well you know what I have to do\nis you know just follow this pointer the\ndeceptively alignment has to do a couple\nmore steps it has to be like well you\nknow what I want is these coins but I\nknow there's this training process and\nso I know this training process cares\nabout this thing so I want to do that so\nthere's definitely a line one takes a\ncouple more steps to get there but the\ninformation necessary to necessary to\nencode in the model to get there is is\nsmaller and so uh it's it's more\ncompressed and it's simpler but it but\nit does take a couple of extra steps of\nreasoning at runtime\nso so the deceptively line one you know\nif you're really like for example if you\nif you really Force like short small\nmodels then you're going to\ndisincentivize deception uh because it\nwon't have enough time to actually you\nknow do the computation alternatively if\nyou have really large models and you\nhave a ton of Simplicity bias then\nyou're going to incentivize deception\nbecause you're going to be in a\nsituation where it wants to encode\nthings and as compressed away as\npossible\nso if um like if\nvarious objective is and call pretty\neasily in terms of the activations then\nthe additional complexity isn't that\nlarge right it's just something like log\nof the model size in terms of bits\nlike for the pointer size are you\noh yeah that's what you mean when you\ntalk about sort of theater and\ncomplexity or are you imagine that it's\nactually some complicated function\nobject\num so I think that like I think that's a\nlike a reasonable guess if you're like\nlogging the model size number of bits I\nthink that um\nyeah I think that's not not crazy I\nthink the like deceptive objective is\ncompetitive with that uh and like\nprobably simpler especially for large\nmodels however I think it might be worse\nthan that so my why might it be worse\nthan that well it's not just a mechanism\nof like pointing to some existing thing\nin the world model it has to be a point\nwhere that generalizes correctly right\nso it has to be a pointer that like you\nknow in the situation where like you\nknow maybe the like mother is in a\ndifferent spot right you know you have\nto actually have a pointer that encodes\nenough information to be able to deduce\nin different situations where do you\nfind the mother right and so it actually\ndoes have to encode potentially enough\ninformation to generalize correctly not\njust like point at the correct thing\nwhereas the deceptive one you know\ngeneralizes correctly just by you know\nfact of it's always going to just do\nwhatever the thing is that results in\nthe most training performance because it\njust wants to get trading performance\nfor the purpose of staying around\nwhereas the coursely line one has to do\nsome extra stuff to actually robustly\nget good training performance\ndoes the uh pointer need to be robust to\nthe uh models uh World sort of model uh\nchanging like over time if you train it\nit's going to update that model and its\nlocation above it may change its\ndefinitional model may change yeah it\ndoes yeah it doesn't have to be robust\nto that or does it just not just not\ncare uh it does and so that that can\nmake the corrigible line one a little\nbit harder because it like you know as\ngreen descent is changing the world\nmodel it also might have to change the\nthe pointer yeah\nokay whereas with the deception it sort\nof doesn't have to change it it's just\nsort of fixed yeah it's great to say\nthat deceptive you're just encoding\nsurvive and then it figures out from\nfirst principles that it needs to what\nobjectives to go find and do that that\nwould be an example survive would be an\nexample of an objective that is simple\nand encoder of the long term but I'm not\neven I'm making a broader claim it can\nbe anything over the long term that it\ncares about doesn't have to be survived\nbecause survive is an instrumental uh\nconvergent instrumental goal of\nessentially anything that you care about\nover the long term but you survive it\nwould be an example yeah and it's much\nsimpler than anything that's like how to\ninfer which of the training\nsorry which which of the objectives we\nneed to find right okay\nokay so I promised also that uh I would\ntalk a little bit about what happens in\na situation where you know uh that it\ndoesn't necessarily have to actually\nknow when it's in training or deployment\nso just a couple of words on this very\nquickly so why does the model not\nnecessarily have to know whether it's in\ntraining or deployment well uh you know\nyou can have a situation where your\nmodel\num\nit doesn't you know even if it if it\ndoesn't know exactly when it's gonna be\nin training and when it's an employment\nhas some uncertainty you know maybe it\nthinks it's just being tested it's in\nsome situation where actually you know\nit's like in a honey pot and you might\nbe trying to look for deception in that\nsituation it can just sort of chance it\nright it can be like well I don't know\n100 sure if this is deceptive if this is\ntraining or deployment but like you know\nprobably if I pick some particular\nfraction of the time to defect you know\nto be deceptive versus not be deceptive\nand then I can you know approximately\nend up in a situation where probably I\nwill you know they will have seen no\ndefections in training and yet there\nwill be a defection that exists in\ndeployment and so you know there exists\nsome optimal fraction of the time that\nyou are supposed that you know that is\nsupposed to defend uh you know for any\nparticular setup you know even if it has\nno ability to distinguish uh you know\nthere was always going to be you know\nsome optimal fraction of defections that\nyou can compute based on you know the\nsize of deployment the size of training\nuh you know how good it is to\ndistinguish between them um and so even\nin a situation where you can't tell at\nall you sort of have perfect adversarial\ntraining uh you're sort of still you\nknow uh you know potentially quite\nscared of this\nokay and\nuh you know I think it's like it's like\nyou know well you know I actually think\nit is actually you should expect that\nit's going to be able to determine which\none it's in so like while I don't think\nit's strictly necessary for it to know\nwhether it's in training or deployment\nactually I think that it probably most\nthe time is so why is it going to know\nwell so there's a bunch of examples of\ndistributional shifts then normally we\ndon't care about very much but in a\nsituation where your model is actively\ntrying to look for distributional shifts\nthat will tell it whether it's in\ntraining or deployment become a lot\nscarier right so these are things like\nuh you know well you train the\nsimulation and then you're pulling the\nreal world and you know usually you hope\nthe simulation is pretty similar but if\nthere's any difference at all then you\nknow it can use it to detect whether\nit's in in training or deployment uh you\ngive it literally time you know it sees\na clock and it can tell you know what\ndate it is and it can use that to you\nknow determine whether it's in training\nor deployment uh you know normally you\nknow it doesn't we probably doesn't\nmatter you know what the dates are on\nthe data that you give it but you know\nin this situation you know maybe it does\nthis leads to the uh the Mayan calendar\ndoomsday scenario where you know you\nhave all the AIS and they're really\ngreat and awesome and then you know the\nyear hits it's like you know 20 50 or\nwhatever and they all defect\nsimultaneously and you know wipe us out\num whoever that's very unlikely\num it's not going to happen but you know\nit's fun to think about\num\nokay but here's a really scary one so uh\nin fact I think the problem is even\nworse than this because there are\nsituations where the model can look for\nsomething which is essentially\nguaranteed to exist in deployment and\nyet is impossible to produce in training\nso why is that well so this is a very\nlarge semi-prime number it is composed\nof two large primes multiplied together\nfactoring this number and producing the\nprimes that were multiplied together to\nproduce it is extremely difficult it\ntakes a bunch of computation that we do\nnot have access to right now and in fact\nyou know somebody actually did this they\nactually took two primes multiplied\ntogether somebody has the primes and\nthen they publish this number and they\nsort of want us to factor it and you\nknow you know try to you know there's a\nbig challenge you get some money right\nif you factor this you know uh okay so\nyou know at some point you know\ncomputation is going to improve\neventually we're going to have you know\nthe resources to factor this number\nsomebody's going to do it right\nsomebody's going to publish you know the\ntwo primes uh you know in some paper you\nknow where they managed to factor it\nokay so if I'm a particularly devious\ndeceptive model the thing that I do is\nthe way I know I'm in deployment is I\njust look for a factorization of this\nnumber if I ever see two things they\nwould multiplied together produce this\nthing then I know for a fact that I must\nbe in deployment because there's no way\nthat in training the computational\nresources were available to factor this\nthing and yet at some point somebody in\nthe real world is going to factor it and\nso I'm in a situation where uh it's\nimpossible for me in training to\ngenerate an input on which the model to\nfax against me and yet I know for a fact\nthat at some point in deployment it will\ndefect because somebody will publish\nthis thing and so I think that you know\nyou shouldn't literally expect it's\ngoing to look for this thing but the\npoint is that this is sort of uh you\nknow an example of a sort of larger\nclass of like you know sort of P versus\nNP style problems where it's just like\nlook it is easier to detect a fact about\nthe world than it is uh to actually\ninstantiate a thing which satisfies that\nproperty and so because of that\nfundamental asymmetry there were always\ngoing to be things that the model can\nlook for that are you can't actually\nproduce examples of via any sort of\nadversarial training\nokay that's the talk\nthe paper uh I also you know tease that\nI have more things to say as well that\nwe can talk about more at some point\nlater maybe but uh for now we will leave\nit at this uh happy to take questions\nand everything afterwards\n[Applause]", "date_published": "2022-12-01T16:06:11Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "dd33e01d0a8ba1ef0edda1ce53723d07", "title": "4:How Do We Become Confident in the Safety of an ML System?: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=Fz-r4qwkrTk", "source": "ai_safety_talks", "source_type": "youtube", "text": "today we'll be talking about something a\nlittle bit different but you know at\nleast a little bit building off of this\nsorts of discussion of various risks\nthat we've been talking about previously\num today I want to talk about this sort\nof question how do we actually become\nconfident in the safety of some system\nyou know previously we've talked a bunch\nabout you know what sort of problems\nmight arise when you're training various\ndifferent systems\nand today I want to move into this\nquestion of evaluation right we have we\nhave some understanding of when to say\nno right when to believe that there\nmight be a problem with some system you\nknow what sort of analysis might lead us\nto believe that some particular training\nregime is going to lead to an algorithm\nthat is that is dangerous in some way\nbut what we don't yet have is some\nunderstanding of how to say yes some\nunderstanding of what makes a particular\nproposal a particular you know approach\nto trying to build some sort of advanced\naligned uh you know AI system uh you\nknow one that we should trust one that\nis good uh you know one that we can\nbelieve in right what are the sort of\nfactors that might lead us to actually\nhave confidence that a system is going\nto do the right thing\num that's sort of the thing that we want\nto address today\nokay\nso we got to start with this sort of\nbasic question uh you know that we were\ntalking about a bunch throughout the\nseries which is\num you know when we train an AI system\nor we use machine learning to actually\nproduce some algorithm which has some\nparticular property what do we actually\nknow about that algorithm right and\nfundamentally and you know we sort of\nstressed this a bunch all we really know\nis we know that it is some algorithm\nthat performs well on the data that we\ngave it\num and it's some algorithm that you know\nwas selected by the inductive biases of\nthe training process to be structurally\nsimple to be you know the sort of\nalgorithm that you know has a large base\nin all of these sorts of basic\nproperties you know we talked last time\nabout all the various different ways in\nwhich the inductive biases could go and\nall of this sort of uncertainty that we\ncan have you know surrounding this but\nthe basic thing that we always know you\nknow is just well it's some algorithm it\nperforms well on the training data and\nuh you know it was selected by the\ninductive biases\nokay uh and so the you know the basic\nproblem that we have is that if we want\nto be confident in the safety of a\nsystem we need to not just know its\ntraining Behavior we need to know its\ngeneralization Behavior we need to know\nwhat will happen when you take that\nmodel and you deploy it in some new\nenvironment when you give it some new\ntask you know what actually happens what\nsort of you know Behavior does it have\nuh you know what sort of what sort of\nbasic structural algorithm do we\nactually learn\num and so we need to know more than just\nthis basic fact about what it does on a\ntraining uh training data\nokay\nuh yeah question\nso you mentioned before about like\nadversarial training and holding out\ncertain data points is it the case that\ndid you have something that calls well\non trading and then it performs well on\nsome held out data are we that confident\nwill perform well on all held out data\nor is it really more of a domain\nspecific problem about a distribution\nuh I think that the answer is well you\nknow something right you know if you\nhave you know some you know some case\nwhere you're like well we don't know\nwhether it's going to have good\ngeneralization behavior in this new\nindividual you know test case and then\nwe test it there and it has good\nbehavior you have learned something\nabout the basic structure you know the\nbasic you know thing that you infected\nwhen you produce that algorithm right\nyou produce an algorithm which actually\nhad a particular generalization property\nyou know if I train on you know going\nback to you know original example from\nthe first talk could we train on\nsomething like you know classifying the\nPac-Man from the you know the little\ntriangle with a block you know they're\ndifferent colors versus shapes\num if I then try the generalization task\nand I learned that well I always learn\nto classify based on color I have in\nfact learned some useful facts about my\ninductive biases and about what\nalgorithm that I learned but the point\nuh you know the sort of concerning thing\nis that there's going to be a lot of\nsituations where we sort of don't expect\nto always be able to tell uh you know\nthe sorts of facts that we really need\nto know about our model in terms of its\nyou know safety just by that sort of\nprocess alone so we talked about this a\nbunch uh in the last two talks about\ndeceptive alignment where with deception\nspecifically it's a case where it's very\nvery difficult to actually gain any\nconfidence in whether your system is or\nis not deceptive uh if the only thing\nthat you know is you know these sorts of\nproperties where well I did a bunch of\nadversal training because as we talked\nabout previously there are multiple\ndifferent model classes which are\ncompatible with the sort of limit of\nadversarial training some of which are\naligned and some of which are not\naligned\num and so we need to know some\nadditional facts other than just that\nnow that being said and we'll talk about\nthis a little bit later on today but\nthere are some ways that you could sort\nof use this extrapolation to gain sort\nof more evidence even about deceptive\nalignment so you know for example we\nmight believe that well because we have\nseen many times if we train in this\nparticular way we're very likely to get\na model that has these properties and\nthose properties are such that they\npreclude deception so for example you\nknow if we train in this way we always\nget a model uh that is you know not\ndoing long-term reasoning or something\nthen we're like okay if we expect if we\ncan you know believe that that Trend\nwill continue to hold then you can\nbelieve that particular training\nprocedure is unlikely to lead to\ndeception even if you don't directly\nhave evidence that the model itself is\nor is not deceptive you have some\nevidence from sort of extrapolating\num but then the question of course\nbecomes how much evidence is that so so\nwe'll talk about these sorts of various\nways to gain evidence\nI think the thing the place that I want\nto start right now is just with this\nquestion of uh you know what do we need\nright what sorts uh what sort of\nstructure should we be looking at if we\nwant to understand you know why a\nparticular training procedure is safe\nwhy it's sort of you know we might\nbelieve it's going to yield uh a sort of\nalgorithm that we trust\nso the basic thing that we I'm going to\nsay we sort of you know want is we want\nyou know first some theory of what\nmechanistically the model is doing so if\nyou know you have some particular\ntraining procedure and you want to argue\nthis training procedure is safe the\nfirst thing that we need to know is well\nwhat sort of algorithm is it actually\ntrying to produce right uh you know what\nis the you know structurally simple you\nknow algorithm the basic thing that we\nactually wanted to be doing at the end\nof the day\num why do we need to know this well I\nthink it's really important for a couple\nof reasons so to start with if we don't\nknow this it's really hard to have an\nunderstanding of you know arguing\nwhether it is going to be safe or not\nbecause the space of possible algorithms\nis really really big\num and\nif we don't know which what sort of\nTarget within that space we're trying to\nhit and what sort of Target we you know\nwe believe we might have evidence we're\ngoing to hit and we're just sort of\nmaking very general arguments about you\nknow is this sort of in general like to\nbe pushed in a safe or unsafe Direction\nI think it's very hard to sort of be\nspecific about okay what are the actual\nalignment properties of the system do we\nactually believe it's going to do the\nright thing\num and so I think I really want to push\ntowards what we're going to be focusing\non is these very specific stories where\nyou know uh we have some specific story\nfor this training process produces a\nsystem that you know in fact is\nimplementing this sort of algorithm and\nthen we want to ask you know do we\nactually believe that is that algorithm\nactually safe\num and of course that's the second thing\nthat we need you know given you have\nsome particular model for what you think\nyour what sort of algorithm you think\nyou're going to learn you also need some\nreason to believe that you're actually\ngoing to get that algorithm some reason\nto believe that that you know the thing\nthat you the sort of algorithm you want\nthe model to be implementing uh is in\nfact the you know the algorithm you're\nactually going to get when you run your\nmachine learning process\nso these are the two sort of most basic\ncomponents that I think we need when\nwe're sort of trying to understand you\nknow a story for why a system is safe\num so we're going to give these things\nnames we're going to say that the first\none is a training goal it this is sort\nof the goal of uh you know our training\nprocess we want to produce the system\nthat you know is well described in this\nway\nand then the second thing is sort of our\ntraining rationale it's like why we\nbelieve that our particular training\nprocess is actually going to produce a\nthing that you know uh falls into that\nthat class that is that actually sort of\nsatisfies that goal so what is the what\nis the goal sort of mechanistically\nstructurally what sort of algorithms we\nwant to get uh and then why do we\nbelieve that our training process will\nproduce that algorithm\nokay so uh together we're sort of gonna\ncall these two things a training story\nso we're going to say you know it a sort\nof a combination of a goal this is what\nwe think our machine learning process is\ngoing to produce and a rationale this is\nwhy we think it's going to do that uh is\na story for why you think your thing is\nsafe right if you want to if you come to\nme you know with some particular machine\nLearning System and you want to say I\nthink this machine Learning System is\nsafe well then you need to say you know\nwe're going to say you need to say what\nis it what sort of algorithm do you\nthink it's going to learn uh and then\nyou know why is it actually going to\nlearn algorithm right\nokay\nso let's sort of just give a very simple\nexample Yeah question\nuh I might be over reading into the\nnotation on the previous slide but the\nprevious notation well it's like\ntraining goal plus training rationality\nimplies training story and simple equals\ntraining story is that a specific\ndifference that's what we're finding out\njust an arrow I'm just like uh if you\nhave both of those two things you have a\ntrading story this is this like the\ncomponents of a training story\num yeah I don't mean to to to give a\nsort of implies relationship there\nokay so let's do a really simple example\nso uh you know here's something that is\nin fact probably safe you know I want to\ntrain a thing to classify cats and dogs\num and you know even then we still want\nto sort of want to work through what is\nthe training story right so we're going\nto say well what is the training goal\nwell in this case we want a model that\ndoes cat you know dog classification but\nit's worth pointing out that there are\nsome models that would in fact do a good\njob on cat dog classification some\nalgorithms you know that one could be in\nplanting that would do a good job on\nthis task uh that we don't want so an\nexample of an algorithm that we don't\nwant they would do a good job on cat dog\nclassification is you know uh agent\ntrying to maximize paper clips right you\nknow as we talked about previously if\nyou have some agent and it has some\nlong-term goal about the world then um\nit is in fact going to you know do a\ngood job in whatever training\nenvironment you put it in because it\nwants to you know pretend to be doing a\ngood job so that it can eventually you\nknow do whatever thing it wants in the\nworld we obviously don't want that and\nso we're gonna you know we're gonna say\nokay the thing that we want is is not\njust like anything which doesn't a good\njob on cat on classification when we're\ntraining a Canton classifier we really\nspecifically want a thing which is\nimplementing you know you know simple\nhuman-like heuristics for cat dog\nclassification uh you know we don't want\nuh you know it to be doing some crazy\nsort of uh you know deceptive thing we\njust want to be doing the basic thing of\nokay humans in fact use some basic\nvisual heuristics to understand whether\na particular thing is a cat or a dog we\njust want to capture those heuristics\nand Implement them in our model if we\nbelieve that it's in fact doing that\nthen we think it's going to be safe\num and then why do we think it's going\nto be doing that well\nuh we have some belief that you know\nwhen we train a convolutional neural\nnetwork on a task that's relatively\nsimple\num\nuh that you know the simplest source of\nheuristics are going to be\num you know the same sorts of human\nimage heuristics you know when we look\nat image we sort of have some idea of\nhow to distinguish between cat and a dog\nthere are basic structural facts and and\nyou know you know about that image that\nyou know let us do this distinguishing\nand you know we have some belief that\nthose are the sorts of simplest\nheuristics\num you know that one can Implement in a\nin a sort of convolutional neural\nnetwork uh to solve that task and so in\nthat sense to the extent that we believe\nthat convolutional neural networks are\nin fact Implement these sorts of\nstructurally simple algorithms and we\nbelieve that you know in this particular\ntask this sort of structurally simplest\nway to do it is just these sorts of\nhuman-like image classification\nheuristics then we think we're in fact\nwe're going to be learning uh an\nalgorithm which satisfies this training\ngoal\nyeah\nyou say that we believe that the\nsimplest parameterization will Implement\nhuman-like cat detection heuristics is\nthat because of the implication of just\nSimplicity in general or is that because\nof The Preserve these things similar to\nour windows and wheels examples from a\nfew talks before\nyes that's a really good question so I\nthink there's a couple of things so the\nfirst thing is um you know in fact we\ncould sort of think about this as like a\npre-theoretic you know if we didn't\nactually know what happens when you try\nto train a cat dog classifier what would\nthe sort of arguments would you know\nwhat sort of arguments would we have for\nwhy it would turn out one way or the\nother\num but then you could also be like okay\nbut we have built this thing right you\nknow we've in fact trained you know you\nknow across the world you know the\nnumber of times that somebody has\ntrained a cat dog classifier is very\nlarge there's a lot of these that have\nbeen trained and you know we in fact\nobserved that you know especially when\nwe do transparency we look at them they\nreally don't seem to be doing anything\nother than these sorts of basic image\nclassification heuristics\num you know stuff like you know uh\nwindows on top of Wheels\nand so we're like okay we have that sort\nof piece of evidence as well one thing\nthat you know we're talking about this\nmore sort of these various different\ntypes of pieces of evidence one is sort\nof really important difference between\nyou know the sort of well we looked at\nit after the fact type of evidence and\nthe well we have some reason to believe\na priority type of evidence is\neventually we're going to have to deal\nwith situations where we have to\ndetermine whether these sorts of we buy\nthese stories and whether we think\nthey're actually correct in situations\nwhere we don't know where how they turn\nout right so if we were you know didn't\nactually know what was going to happen\nyou know when we trained a cat dog\nclassifier and we were just looking at\nthis sort of basic story we'd have to be\nyou know make some determination about\nhow confident we really felt in it and\nin this case you know it obviously it\nends up being fine but I think it's\nworth pointing out that the theoretical\narguments you know that we can make for\nyou know how likely it is for this to\nactually turn out fine sort of a priori\nare a little bit sketchy you know I\nthink that most of our confidence right\nnow does really come from the fact that\nwell we've done it a bunch you know and\nwe looked at it and it doesn't look like\nit's doing the sort of you know the bad\nthing\num but it's you know very difficult to\nsort of tell really strong stories about\nwhy we would have known that in advance\nyou know what sorts of properties you\nknow what sort of things could in fact\nhave told us uh you know in advance that\nthe model would be doing this sort of\nyou know structurally simple heuristics\nthat we wanted to be doing\nYeah question\nI mean in the last two events through\nthe way can developed deception and it\nmean it was that it already develops a\ngood word model when it understands that\nit needs to take instruments and\nunderstand the situation well enough\nthat they'd understand that if it\npretends to do that later it can take\nover the world I think we can a priori\nsuspect that if we just gave it all the\ntraining data of cats and dogs and only\npictures of cats and dogs and all no\ntext and no internet anything that it\nwill be really unlikely to figure out\nits situation and\num and instrumental planning and so on\nyeah so I think this is basically\nexactly where we're going so you know\nthat the thing that you just described\nyou know is the sort of training story\nthat we want right is a you know\nconcrete reason to believe that there is\na particular thing that would have to be\npresent to you know get some particular\nfailure mode and we think that that\nthing is not present in our training\nprocess and therefore we think it's not\ngoing to yield you know some particular\nthing as instead going to yield the\ntraining goal that we want now we want\nto be a little bit more strict than that\nright we don't want this you know this\nis not just a it won't produce deception\nwe also want it to be well it is going\nto produce the thing that we do want\nright\num and so we also need to have some\nargument for why you know it's in this\ncase it's you know it's actually going\nto produce the sort of classifier that\nwe want\num but yeah I think basically the thing\nyou're saying is correct you know that's\nthe sort of evidence that we're looking\nfor right these sorts of piece of\nevidence that can be really strong you\nknow strongly convince us that we're\ngoing to get a model that has these\nsorts of particular properties uh you\nknow and it's not doing not not\ndeceptive you know because we have some\ngood reason to think that\nokay so these are the sorts of stories\nthat we want to be telling\num okay so some you know some some sort\nof you know facts about these trading\nstories I think are important so the\nfirst thing you know that's worth\npointing out is that these sorts of\nstories should be sufficient for safety\nso uh if the model sort of conforms to\nthe training goal\num then it should be safe so in this\ncase uh you know we have some argument\nas part of the training story as to why\nthe training goal is safe in this case\nwe think that you know human-like cat\ndetection heuristics are safe because\nyou know they're just the same sort of\nheuristics that humans use uh we sort of\nwe know that they're in fact safe they\ndon't yield anything bad it would be\nreally hard you know for them to be\ninfluencing some sort of dangerous\noptimization procedure we believe that a\nlot of the sorts of really dangerous\nthings that models produce require them\nto be implementing some sort of\noptimization procedure given that we\nthink that it's not going to be doing\nthat you know or given that you know we\nhave a training goal we say it's not\ngoing to be doing that given that it\nactually satisfies the training goal and\nis in fact not doing that you know it\nshould be safe\nand then you know if the training\nrationale holds then you know we believe\nthat in fact the resulting model will\nconfirm conform to the training goal and\nso we have a reason to believe that it\nyou know we're actually going to get an\nalgorithm and that algorithm is in fact\ngoing to be a safe one\nokay so in this case you know again the\nreason you know that we're sort of\ngiving is we're like well okay if the\nsimplest parametization the classifies\ncasts from dogs you know uses these\nsorts of human-like heuristics then um\nyou know uh and and we believe that\ngradient descent finds these sorts of\nsimple parameterizations then we think\nyou know we should conform to the\ntraining goal\nokay uh and you know these sorts of\ntraining stories are falsifiable so you\nknow I think one you know thing that's\nmaybe worth pointing out is you know the\ntraining story I was just giving is not\nclearly true so you know uh it's\nactually not clearly the case that\nactually we do learn you know only these\nsorts of human like heuristics when we\ndo cat dog classification I think\nthere's some evidence that um and this\nis from uh adversarial examples uh our\nfeatures not bugs where there's some\nevidence that actually sometimes we\nlearn\num\nsorts of you know these sorts of\nfeatures that are not human-like at all\nthey are these sorts of pixel level\nfeatures that are real features in the\ndata that do really correspond to things\nthat are related to you know whether an\nimage is a cat or a dog\num but they're not the sort the same\nsorts of high level structural features\nthat humans often use when they're\ndistinguishing various different uh you\nknow shapes\num and so in that context we can sort of\nbelieve that well we have you know this\nsort of very basic story you know we\nmight hope that even in the sort of in\nthe case of something very basic and\nstructural you know simple as cat dog\nclassification we can tell a really\nprecise story where we're just like okay\nwe want we're going to get exactly this\ntype of algorithm we have these reasons\nto believe that this training process is\ngoing to produce it um but even then\nit's very tricky because uh there are a\nlot of sort of very a bunch of\nsubtleties to what sort of algorithm we\nmight be learning what sort of features\nmight be paying attention to\num and in this case you know\num we're not super concerned about it we\nhave some understanding of what these\nsorts of pixel level-like features are\nand we don't think that they're you know\ndoing some sort of crazy optimization\nthat is existentially dangerous but it's\nstill worth pointing out that they're\ndoing a you know type of uh you know\nalgorithm which is not the sort of\nalgorithm that we maybe wanted when we\noriginally were like okay we think CNN's\nmay be similar to how human you know uh\nyou know image you know regions of human\nbrain work and so we're going to try to\ntrain something similar to human image\nclassification in fact we often don't\nget that we still get an algorithm that\nactually does do a really good job at\nthe task but it's one that is maybe a\nlittle bit different than the algorithm\nwe were hoping to get and it's hard you\nknow in advance to really be able to\npredict what sort of algorithm I'm going\nto get it's very difficult to understand\nare we going to get these sorts of you\nknow weird pixel level features are we\ngoing to get the sort of human-like\nfeatures in this case we you know we do\nget a combination of the two\num\nbut making these sorts of you know\npredictions getting the evidence\nrequired to be able to know are we going\nto end up with this algorithm or that\nalgorithm is the sort of whole name of\nthe game here\nYeah question\nare these training stories a standard\nmethod used in AI research and if not\nwhy not\nuh I don't think you'll hear many people\nusing these same sorts of terminology I\nreally like them\num I think we'll talk a little bit later\nabout you know some of the sorts of\nother terminology that is often more\ncommon\num oftentimes you'll hear you know inner\nand outer alignment used to describe\nsome of the same sorts of things we're\ngoing to be talking about later we of\ncourse talked about inner and outer\nalignment previously and in slightly\ndifferent context and I'll explain a\nlittle bit later on you know what sort\nof what they sort of look like in this\nmore General context\num but I think that these are useful\nterms I think that if you're trying to\nunderstand you know how to think about\ninter you know how to structure your own\nthoughts in terms of you know if you're\ngiven some system is the system you know\nis this machine learning process going\nto be safe I think this is the right way\nto think about it regardless of what\nterminology you use the right way to\nthink about it is you know have some\nunderstanding of what algorithm you\nthink it's going to produce and why you\nknow what reasons we have to believe\nwhat pieces of evidence we have to\nbelieve that it's going to go there\nokay\nokay\num and then you know again I sort of\nwant to point out that I think these\ntraining stories also are very general\nright so we can use them for cat\nclassification but as we'll see later\nyou know we can also use them for very\ncomplex alignment proposals uh you know\nany situation really where you want to\nrely on some AI system and that system\nis trained via machine learning process\nyou have to have some reason to believe\nthat among the space of all possible\nalgorithms that that machine learning\nprocess could learn you know why did you\nlearn the one that you were actually\ntrying to get\num okay great so this sort of idea is\nanytime you're training machine Learning\nSystem you sort of theoretically sort of\nshould can have a training story in the\nback of your mind you know when I'm\ntraining this thing I'm training on this\ntask what am I trying to get what sort\nof algorithm do I want to be you know uh\nimplementing and why do I at all believe\nthat among all the possible algorithms I\ncould find you know that's the one that\nI'm going to get\nokay\nso uh breaking things down a little bit\nmore so we've sort of been glossing over\nthis but with within each class of the\ntraining goal and the training rationale\nthere's a couple of things that you sort\nof always need to have so in the trading\ngoal we always want to have a\nspecification we want to know you know\nexactly what it is uh you know\nmechanistically the sort of algorithm\nthat we want and we sort of need this\ndesirability we also need to have some\nreason to believe that whatever that\nmechanistic algorithm is we actually\nwould like it you know it's actually\ngoing to do good things if we got a\nthing that actually satisfies our\ntraining goal\num uh you know so these are sort of both\nnecessary\nand then for the rationale we sort of\nagain have two things we have you know\nthese sorts of constraints we have\nthings that we know are true sort of\nhard facts about the structure of our\ntraining process and then we have sort\nof what I'm going to call nudges which\nare the various different sort of\nindividual slight bits of evidence that\npush us in one direction or the other so\nyou know the basic constraints are we\nknow that is in fact going to be some\nalgorithm that fits the training data\nright this is the basic constraint of\nthe structure of the training process\num but then we also have a bunch of sort\nof uh you know things that are not those\nsort of hard constraints things that uh\nyou know give us some piece of evidence\nin One Direction or the other you know\nokay we think maybe this algorithm is\nslightly simpler than this other\nalgorithm\num but it's not this it's not sort of\nHard Evidence it's it's uh and so we're\nsort of gonna you know try to isolate\nthese things\num together so we're sort of thinking\nabout what are the sort of classes of\nmodels that we could find and within\nthose class of models you know why do we\nbelieve we might sort of be pushed in\nOne Direction or the other\nokay\nall right\num I think one sort of important sort of\nthing to sort of touch on is how\nmechanistic do we really sort of need\nthese descriptions to be I think this is\na really tricky question and sort of a\nlot of the complexity of you know what\nI'm talking about right now comes in\nthis question of uh when we're thinking\nabout you know specifying a particular\ntraining goal we're thinking about\nspecifying you know what is it what\nalgorithm is it that I want my model to\nbe implanting how specific do we need to\nbe and I think that one thing that's\nsort of worth pointing out there is that\num\nyeah so if you uh you sort of don't want\nto if you specify too little then you're\nin a situation where um you know you're\nnot actually constraining from a safety\nperspective right if I just specify you\nknow as we talked about the very\nbeginning if the only thing I say is\nwell I want some model that in fact does\na good job on cat dog classification\nthat's not enough facts you know in\ntheory to know that your model is\nactually aligned you need to know\nsomething about its generalization\nBehavior or something about what is\nactually doing right it is in fact doing\nsomething that mechanistically uses the\nsorts of heuristics that we want\num you know is sort of specific enough\nin that case to sort of imply safety but\nyou need something that's specific\nenough to imply safety but at the same\ntime if you specify too much there's\nsort of this question of well why are we\neven doing machine learning in the first\nplace if we already know what algorithm\nwe want the model to implement right so\nyou know if I know exactly what\nalgorithm I want I know exactly how it's\ngoing to work I know you know what the\nstructure is well I can just write that\nalgorithm right the reason that we do\nmachine learning at all is because we\nbelieve that well we don't know how to\nwrite the algorithm but we know how to\nsearch for the outcome right we believe\nthat we understand a space of possible\nalgorithms uh you know and in that space\nwe think that there are desirable\nalgorithms and you know we have some\nreason to believe we can distinguish\nbetween the desirable undesirable\nalgorithms and find one that can want\nthe right thing even if we don't\nourselves know how to write that\nalgorithm directly\nand so you know if we want to continue\ndoing machine learning at all you know\nwe can't throw away you know that basic\nthing that is the reason we do machine\nlearning which is what we don't know how\nto do what what the algorithm that we\nwant is and so we need to have some\ndescription that is on a high enough\nlevel that it actually still allows us\nto do machine learning at all and so\nthis is sort of this sort of tricky um\nuh you know sort of dichotomy that we\nhave to deal with between you know we\nwant to be as specific as possible\nbecause we want to sort of have some\nunderstanding that is clear enough that\nit precludes things that would be unsafe\num but you know if we're too specific\nthen we've gotten into a situation where\nwe've sort of eroded the advantages of\ndoing machine learning at all\nokay so we want sort of as much\nspecificity as possible while still you\nknow being able to uh you know not have\nto describe exactly the algorithm and\nsort of stay competitive with other\napproaches\nokay so how much specificity is enough\nso I here's like you know an example I\nthink of you know what sort of\nspecificity I think do we really want in\na training goal so uh here's an example\nis courage ability right so courage\nability uh what is a sort of cordial\nsystem well uh you know the corrigible\nsystem here you know Paul has a bunch of\nexamples of the things that a cordial\nsystem would do behaviorally right where\nlike figure out whether I built the\nright Ai and correct any mistakes I made\nuh remain informed about the ai's\nbehavior and avoid unpleasant surprises\nand a bunch of these you know basic you\nknow facts about what we want the\nbehavior to be I think this is\ninsufficient so why do I think this is\ninsufficient well uh what's sort of\nHappening Here is we've it's sort of\nvery similar to the thing we were\ntalking about at the beginning where we\njust described the behavior that we want\nfrom the model but we haven't given any\ninformation about what mechanistically\nAlgae the algorithm it is that it might\nbe implanting and so I think this is\nsort of a problem because it puts us in\na situation where if this is the if this\nis the sort of basic Target that we have\nit's very difficult to understand you\nknow what would it actually look like to\nfind an algorithm which has these\nproperties what sort of algorithms do\nhave these properties right these are I\nthink that this is sort of useful as a\nlist of desiderata in terms of what\nproperties we want and to be clear it's\nnot the case that Paul is sort of\nintending this to be a training you know\ngoal here but I think this is useful as\na list of design but it's not sufficient\nas a training goal because it doesn't\ntell us you know given that we know a\nmodel does some of these things in the\ntraining uh you know uh do we have any\nreason to actually believe it's going to\ndo the right thing later so we need\nsomething a little bit more specific\nthan this so here's maybe a better\nexample uh and this is also from Paul\nso here at the you know Paul has a more\nspecific case of what sort of uh model\nyou know he's looking for Yeah question\na little confused about your example\nwithout encourage abilities because it\nsays here that training goals need\nspecification and desirability so\nwhatever we want and why we want it was\nthe training rationality with the\ntraining rationale gives the constraints\nand the nudges it seems to me that\nPaul's description of Courage ability is\na training goal but not a training\nrationale so what essentially is it that\nit's missing from the training goal side\nof things\nyeah so I think that you could sort of\nuse it could be useful as a way to sort\nof have training goal desirability right\nit's a reason that you might like a\ntrading goal is that that training goal\nis courageable but I don't think it is a\ntraining goal itself because it doesn't\ngive us a mechanistic description of\nwhat the model is doing right if I know\nif you say ah the model's corrigible\nthat's a property of the model's\nBehavior right it is this thing that is\nsaying this model will listen to us and\nyou know change based on the things that\nwe say right it'll try to correct you\nknow mistakes that's a that's a property\nof how the model acts but it's not a\nproperty of what the model is doing\nright like what is it the model might be\ndoing that caused it to have that\nbehavior so maybe a more specific thing\nright you know that you might have would\nbe we talked previously about this idea\nof right corrigible alignment you know\nthe Martin Luther models right so you\ncould have some description that's like\nI want a Martin Luther model and then\nthe reason I think a Martin Luther model\nis going to be good is because I think\nit's going to have these behavioral\nproperties of Courage ability right but\nyou need that additional step you need\nthat reason you need that sort of claim\nof what algorithm is actually\nimplementing right if the only thing\nthat you have is ah it is you know going\nI'm going to get a model that\nbehaviorally is trying to act corrigible\nas we sort of talked about a bunch last\ntime that behavioral analysis is just\ninsufficient to be able to know whether\nthe model is safe right we really need\nto have some additional facts about what\nis doing internally to know anything\nabout its generalization because in\ntheory you know if the only things I\nknow are ah it looks like it's doing X\nit could be pseudo-aligned it could be\nyou know deceptively aligned there's all\nthese sorts of various different things\nin the model could be doing that are\nconsistent with any sort of behavior and\nso we want to really be able to you know\nin this case have reasons to believe\nthat among that set of models that are\nconsistent with the behavior why are we\ngetting what sort of particular one do\nwe want right and so this is why I think\nthe sort of basic op we want a cordial\nmodel insufficient here we really want\nsomething more specific than that\num so here's an example of something\nthat's more specific\nso here Paul's describing uh this sort\nof you know we want a uh model that has\nsome model of the world and some way to\ntranslate between that model of the\nworld and natural language and so it\ndoes the sort of direct translation\nbetween the models World model and the\nquestions that we ask it I think this is\nsubstantially more specific and sort of\nmuch more direct about describing\ninternally what we sort of what we want\nthe model to be doing it is saying we\nwant it to have some model of the world\nthat just sort of is sort of a plain\nmodel of the world just describes how\nthe model works and then some way that\ndirectly and truthfully translates uh\nbetween the sort of Concepts in the\nmodels model of the world and the sort\nof questions that we ask it\num that sort of direct translator uh is\nis I think a very nice mechanistic\ndescription of what we might want right\nwe're like okay if we have a model and\nthe sort of algorithms influencing is it\nhas the model of the world and then\ndirectly translates facts from the model\nof the world uh into natural language\nand it tells us those facts then we have\nsome reason to believe that ah this\nisn't just an algorithm that looks like\nit's doing the right thing it is really\nstructurally doing the right thing it is\nactually in fact honestly reporting its\ntrue beliefs about the world and so now\nwe have a reason to believe that's doing\nthe right thing now importantly this is\nstill sort of a very high level\ndescription right it's not the case that\nit specifies exactly how you would learn\na world model and all of the facts that\nwould be involved in that world model\nright we don't know all of those we\ncan't write them down we're doing\nmachine learning to try to learn them\nbut we still want to get a model that\nactually has this basic structural\nproperty that it does in fact have some\nmodel of the world that it truthfully\ntranslates into its into its outputs\nright and so we're like if we believe we\nhave that mechanistic property then we\nsort of think we can be safe and so this\nis the sort of level of specificity that\nI think we need at the very sort of at a\nminimum right\num because if we have this level of\nspecificity about what sort of algorithm\nit is that we're looking for then we can\nstart to reason about why we think that\nyou know among all the possible\nalgorithms is actually going to you know\nbe doing something safe\nokay so this is sort of what I'm\nimagining right we don't want something\nthat is just this high level behavioral\ngloss like courage ability I want\nsomething that is a little bit more\nspecific in terms of describing\nmechanistically what it's doing but not\nso specific that we're describing you\nknow exactly all of the you know\nknowledge that it has\nokay\nokay so that's the sort of basic high\nlevel you know how to think about these\nsorts of training stories\num and now let's we're going to sort of\ntry to get a little bit into the\nnitty-gritty you know in terms of\napplying these and thinking about\nconcretely\num what some you know uh proposals uh\nfor you know how to build safe advance\nthat I might look like and how to\nevaluate them sort of in this framework\nokay so I want to sort of go back to\nsomething I was saying earlier which is\nwell okay what are sort of other\nterminology that you'll hear in this\ncontext uh you know previously we\nintroduced this sort of inner and outer\nalignment dichotomy and and we talked\nabout them uh previously in these sort\nof strict contexts so we said you know a\nsystem is outer aligned uh if it you\nknow has uh you know some loss function\nwhere if the model were optimizing for\nthat loss function we would be happy and\na system is sort of interaligned if um\nit has some mace objective and that mace\nobjective is sort of aligned with it's\nthe same as the sort of loss function\nyou're trying to get it to optimize\num and I think that in this context\nthese definitions are a little bit too\nstrict we sort of want to relax them so\nso what's sort of the problem with these\ndefinitions\num and and we can sort of think about\nthem in a training stories context so uh\nthe sort of strict version of outer\nalignment and the strict version of\ninner alignment are both presupposing a\nparticular training goal they're saying\nsuppose that your training goal was you\nwant a mace Optimizer you want some\nmodel which is implementing an\noptimization procedure and you want that\nmodel to specifically be optimizing for\nthese sort of uh thing that you\nspecified in your loss function in your\nyou know in your reward function\nwhatever the thing that you specified\nyou were trying to get to do when you\nwere training\num if that is your training goal right\nif the thing you want to get is a model\nwhich is actually doing\num you know optimization and it's doing\nit for this thing that you wrote down\nthen these sorts of concepts are exactly\ncorrect they're the you know the things\nthat you want they are you know do we\nbelieve you know the training goal\ndesirability do we believe that if a\nmodel we're actually optimizing for that\nthing that I wrote down it would be good\nand then the you know training goal uh\nthe training rationale you know do I\nactually believe and I'm going to get a\nmace Optimizer that is actually has that\ngoal you know among the possible\nalgorithms that I could find\nbut the thing that is worth pointing out\nis that well\num this is not the only training goal\nthat we can have right so we might not\nwant to build a model which is directly\noptimizing for the uh you know some\nparticular objective we also might want\nto build a model that is optimizing for\nsome particular thing but the thing\nthat's optimizing for is different than\nthe thing that we wrote down it's not\noptimizing for our loss functions\noptimizing for something else\num so you know an example of this might\nbe I want to train a model this sort of\num acts in a similar way to humans and\nthe way that I'm going to do that is I'm\ngoing to reward it for\num you know cooperating in some\nenvironment with other agents and we're\nlike okay well we don't directly want a\nmodel that optimizes for cooperation but\nwe believe that the sorts of models\nwhich do a good job in this sort of\ncooperation environment\num have good properties right the sorts\nof models that you know generally learn\nthings like you know be you know\nCooperative work with other agents you\nknow be nice whatever\num that's a training story that sort of\nis trying to train a model which is\ndoing something different than the\ndirect thing that we're trying to get to\ndo right we have some loss function you\nknow we're directly training on\ncooperation we're trying to get a model\nthat isn't just like optimizing for\ncooperation it's trying to we're trying\nto get a model that's doing something\ndifferent\nand so in that context you can imagine\nif that sort of approach succeeded then\nyou would have a model that is aligned\nin the sense that okay it is doing\nsomething good right it's doing you know\nit's it's nice it's trying to cooperate\nwith other agents it's doing things in a\ngood way but it wouldn't be strictly\nouter lined or Inner Line because it\nwould not be the case that if it were\njust optimizing for this raw cooperation\nobjective we would be happy with it and\nit's also not the case that it is in\nfact directly optimizing for that strict\ncooperation objective right it's in fact\ndoing something different than what we\nsort of directly specified in the loss\nfunction but it's doing something\ndifferent in a way that we want it's\ndoing the sort of correct thing instead\nokay so we need we need slightly broader\nterms right we need to say in the more\nGeneral case where we have any possible\nyou know approach that somebody might\nhave or thinking about how to build a\nsort of safe system we need a way of\nthinking about it that is a little bit\nmore General than just assuming that the\ntraining goal is you know a model which\noptimizes for for loss\nokay so what are the more general\nconcepts how do we how do we sort of use\nthis in a more General case\nso uh here is our sort of training\nstories based evaluation framework\nso we have this sort of General notion\nof outer alignment you know which is\nwhether the training goal is good for\nthe world right if we got a model and it\nsatisfied the training goal would we be\nhappy with it right this is our training\ngoal desirability and in this context we\ncan sort of think about this as a more\nGeneral generalized version of outer\nalignment it isn't saying okay the\nspecific training goal of optimize for\nthe loss function is that good we're\nsaying okay that might not be your\ntraining goal whatever your training\ngoal is you need to have some reason\nthat if you actually got a model that\nhad that property you would be happy\nwith it\num and then we need a sort of notion of\ncompetitiveness we need to believe that\nif we actually did get that training\ngoal\num it'd be powerful enough to compete\nwith other AI systems so in this case\nand this is sort of where it slice it\ngets a little bit different from the\nmore General training stories framework\nin this case we're thinking about you\nknow powerful Advanced AI systems right\nthese are you know how we evaluate\nproposals for building uh systems that\nare you know arbitrarily powerful for\naligning you know the most powerful AI\nsystems for making sure they are sort of\ngood for the world overall\num and if your proposal is something\nlike you know well just do cat dog\nclassification right well well that that\nthat shouldn't be okay right that's not\nthat doesn't actually solve the problem\nthe problem was you know people actually\nwant to do things that are not just cat\ntalk classification you know they want\nto do all sorts of other tasks um and\nthey want to use machine learning for\nthem and so you know we as you know\nsafety researchers right you know need\nto find some way to figure out how to\nlet the people do those things in a safe\nway and so that means you know just do\ncat dog classification isn't an answer\nright we need to have some reason to\nbelieve that whatever approach we have\nhere would act actually be able to\naccomplish the various tasks that people\nwant to use the machine learning system\nfor\nokay so this is this performance\ncompetitiveness question\nand then again we sort of have these\ntraining rationale components so we have\nthis sort of more generalized version of\ninner alignment we're saying uh you know\nis it in fact the case that your\ntraining rationale is correct right is\nit actually going to hold you know I\nhave some reason to believe some piece\nof evidence to believe that I'm going to\nget this particular type of algorithm uh\nis that true right do we actually\nbelieve that I'm going to get that\nalgorithm among all the possible\nalgorithms that I could find right\num you know and again the sort of same\nreason we talked about previously about\nwhy this might not happen you know all\nthe reasons you might get a\npseudo-aligned Model A deceptively line\nmodel these are reasons that your sort\nof training rationale might not hold you\nmight get a different model than the one\nyou intend\nand then we also need another sort of\ncompetitiveness portion here so we need\na sort of implementation kind of address\nwe need some reason to believe that your\ntraining rationale the sort of way\nyou're trying to build your model is\nactually implementable right so in the\nsame way as you know train a cat dog\nclass with higher isn't a solution\num you know another thing that isn't\nalso not a solution is uh you know just\nbuild a you know simulation of billions\nof humans you're like okay if we just\nhave enough humans and we simulate them\nin perfect Fidelity and we get them to\nthink about the problem enough then they\ncan solve it and we're like okay that\nmay be true but we can't build that\nright we have no ability to actually in\nfact you know upload billions of humans\nand get them to think about the problem\nor whatever and so we're like okay this\nis also not a solution you know if we\nwant something which actually sort of\naddresses the general problem with AI\nexistential risk we need some reason to\nbelieve that you know Not only would it\nbe able to sort of solve the problems\nthat people want machine learning for it\nalso needs to be able to you know\npractically do so right it means it's\nnot just in theory be capable of doing\nit it means also in practice be\nsomething we could actually do\nokay so we need these sorts of basic\ncomponents right we need these inner and\nouter alignment you know the generalized\nforms you know reason to believe that\nthe thing we're trying to get is good\nand that we're actually going to get it\nand then a reason to believe that we can\ndo so in a competitive way that it'll\nactually be able to solve the problems\npeople want AI for and that will\nactually you know be able to do so in a\npractical way\nokay\nall right so we're going to now look at\na case study a particular uh actual\nproposal uh for addressing you know the\ngeneral problem of AI essential risk\nthis is maybe this is the first sort of\nconcrete proposal we're going to look at\nuh in in you know this this series of\nhard we'll look at some more later but\nthis is going to be the the one we have\nfor this talk right now\num and it's a particularly interesting\nexample because it's one that is\noftentimes very difficult to sort of\nanalyze under a more conventional\nframework so um but I think it's a\nreally interesting proposal and so I\nthink it's a good place to start so what\nis microscope AI so here's the approach\nso the first thing that we do is we\ntrain some predictive model on some data\nwe want to get our model that is trying\nto sort of understand uh and sort of\npredict some particular distribution\nand then we're going to try to use\ntransparency tools to understand what\nthat model learned about the data\num and extract that understanding\ndirectly and use it to guide human\ndecision making and so we're saying okay\nin the same way you know we could think\nabout you know this sort of Windows on\ntop of Wheels example well we learned\nsomething about the model you know what\nthe model was doing it was looking for\nWindows on top of Wheels but if we\ndidn't know things about cars beforehand\nwe also would have learned something\nabout cars right we would have learned\nthat cars have the property that they\nare detectable by looking for Windows\nand wheels now we happen to have already\nknown that but we might not have already\nknown that right there we have the\nability to actually gain a ton of\ninformation and useful information that\nwe can use for building our own systems\nand you know doing things directly that\ndon't have to go through machine\nlearning by using machine learning right\nthe thing that machine learning does is\nit produces a model which has really\ngood understanding of some data we could\nmaybe directly extract that\nunderstanding and use it ourselves\nYeah question\nI would actually know the\nSE things will be\n[Music]\nwe know something about wheel spin is\nsomewhat in need to detect but if it\nuses some alien concepts with which you\ncan 100 detect what a car is\num how do we know that there is this\nthis sort of information to be gained\nand also\nyeah no perhaps give it that\nyes this is a really great question and\nwe're gonna we're gonna talk about this\nvery soon I think the answer is\nbasically we don't we absolutely do not\nknow that uh and so you know if we're\ntrying to ask you know is this you know\na actual solution you know which of\nthese sorts of criteria does it succeed\non I think the answer is well it\ndefinitely doesn't we definitely don't\nhave a good reason to believe that it\nsatisfies these criteria that we just\ntalked about right\num it's it's not a solution right it may\neventually become a solution but it you\nknow we can um the thing that we're\nhoping to sort of understand by\nanalyzing in this way is what are the\nthings that would be we would need to\nhave to believe that it is a solution\nright to believe that it actually does\nhelp with this problem that this is an\napproach that we could use you know to\nsolve the sorts of problems that people\nwant AI to solve to do so in a way where\nwe actually believe we're going to get\nthe sort of model that we want\num you know what are the sorts of things\nwe would need to know uh to to believe\nthat right and so that's sort of what we\nwant to understand here\num does it actually succeed I think the\nanswer is well we have no reason to\nbelieve really that it would right now\nbut maybe we could eventually get there\nif we have you know really good\ntransparency for example right some\nability to actually believe we could\nextract you know the sorts of Concepts\nin a human relevant way then maybe you\nknow this would become a viable approach\nI think that right now it's not a viable\napproach but I think it's a really\nuseful and instructive approach because\nit's going to help us understand how to\nthink about these sorts of approaches in\ngeneral right\num yeah another question\nhow would this score under\ncompetitiveness like let's say that we\ndo have transparency toable that we can\nuse those that understand heuristics of\nthe model I've learned isn't that still\nsignificantly less effective than just\nrunning the model and letting it tell us\nthings\nyeah so really liking these questions\nbecause they're exactly the sort of\nthing I'm trying to get you to do right\nwhich is take an approach and then try\nto analyze it you know under these\nquestions that we've been asking right\nyou know do they actually have these\nsorts of competitive properties that we\nwant do they actually have these\nalignment properties that we want I\ndon't know I I don't know the answer to\nthat question\num I think that um in this case like I\nwas saying at least it seems like there\nare at least some difficulties uh and\nyou know like you were saying you know\nit seems it might be hard to actually\nget a system which can you know do all\nthe sorts of things that we want I'm\ngoing to talk in a little bit about you\nknow going through those difficulties\nbut but yes I think the thing you're\nsaying is exactly correct which is that\nthere are competitiveness difficulties\nhere\num that seem you know quite difficult\nwe'll talk about you know what they\nmight look like but but yeah absolutely\nYeah question so\nfor example now in chess and Google\nthese AI is very much be test and still\ngo players did improve like after seeing\neven without good interpretability after\nseeing all of a ghost moves they did\ndevelop new strategies and they\ndeveloped but still they are very far\nbehind or they're actually AI do you\nfind it losable that it could\ninterferability we could understand the\nstrategies of the AI is so bad that\nhuman players could just learn that and\nbeat the AI\nuh so I think the chess and go is a\nparticularly interesting example so I\nthink one thing I will say is that well\nokay we don't just have to do it\nexclusively with human brains we can\nstill implement it in our own algorithms\nright so if you look at something like\nstockfish which is the you know a human\nengine that uses substantially less AI\nthan um alphago uh or sorry in in like\nchess for example uh you know uses\nsubstantially less computation than\nAlpha zero\num I believe that like current versions\nof stockfish are are are better than\nAlpha zero was at the time when Alpha\nzero beat stockfish\num and that has done you know that's\npartially through AI but it's also a lot\nthrough understanding some of the things\nthat Alpha zero was doing and trying to\nyou know actually write them in\nalgorithms ourself so there is some\nextent to which you know uh yes I think\nthe thing you're saying is correct it\ndoes seem like there's a large extent to\nwhich you know\num it's it's gonna be really hard to get\nhumans to the level of like you know\nthese sorts of systems but also they do\nteach us things right they have improved\nHuman Performance and they've also\nimproved our ability to write other\noutcomes which do these sorts of things\nso you know could this approach\neventually succeed I think maybe but I\nagree that there's sort of some pretty\nmajor obstacles along these lines\nokay so just you know talking about this\na little bit more Chris Ola who's the\nhead of interpretability and impropic\nalso is is the person who uh you know\ncreated this approach you know I think\nyou know one good way of thinking about\nit is sort of what he talks about here\nso the idea is that um you know when\nyou're doing interpretability these\nsorts of visualizations are a bit like\nlooking through a telescope so just like\na telescope transforms the sky into\nsomething we can see the neural network\ntransforms the data into a more\naccessible form one learns about the\ntelescope by observing how it magnifies\nthe night sky but the really remarkable\nthing is but one learns about the stars\nand so the idea is you know uh\nvisualizing representations teaches us\nabout neural networks but it teaches us\njust as much potentially about the data\nitself so the idea you know this is the\nsort of basic idea behind this approach\nis well you know we are learning\nsomething about the network when we do\ninterpretability maybe we're also\nlearning something about what we were\ntrying to understand\num\nand so you know we can sort of use\nbetter AI systems to improve you know\nhuman decision making\num and then you know use that to sort of\nbuild better uh better systems\nokay so this is an idea it's an\nunapproach that we can sort of think\nabout as a way to okay you know we need\nsome mechanism for being able to build\nsystems which can solve these sorts of\nadvanced tasks and we need some\nmechanism for doing so in a way we\nbelieve it's going to be safe and this\nis a mechanism and so we want to\nunderstand is this is this a good one\nokay\nso what's the training story here so\num the sort of training goal right is a\nmodel that predicts the data given to it\num and importantly you know we want it\nto be sort of uh you know not optimizing\nsomething over the world it's sort of\njust a world model you know just doing\nsome basic prediction tasks and we\nwanted to sort of be doing that in a way\nthis human understandable we want to be\nusing Concepts that once we understand\nand extract those Concepts we'll be able\nto understand what they're doing so we\nwant it to be sort of just understanding\nthe world and doing so using human\nunderstandable Concepts\nand you know the reason we think we're\ngonna get this well we think that you\nknow if you just train on a prediction\ntask on some really really large\nprediction Set uh you know set of data\nyou're going to learn a you know just a\npure predictor that's doing you know\njust prediction why do we think this\num well you know one reason you might\nthink this\num you know it's it's unclear whether\nthis is true but one reason you might\nthink this is you know we were talking\npreviously about this sort of you know\nwhy would you get deception and one of\nthe things we sort of mentioned a lot\nwas if the thing you're trying to train\non the actual objective you're trying to\nget to do is very very simple then the\ncase for why you would get something\nlike deception is much less because uh\nyou know internal alignment is actually\nreally easy to describe it's not like it\ntakes you know this really long and\ndifficult path and so maybe it's more\nlikely you would get that so we're\nthinking well okay if you're trying to\ntrain something that's just doing\nprediction maybe we believe we're\nactually going to get something that\nreally is just doing prediction we have\nsome reason to believe that\num and so we think that you know we can\nget something that's doing the right\nthing there's also separately this\nquestion of why we think we're using\nhuman understandable Concepts uh which\nis a really big and tricky question\num I think maybe one way to think about\nthis uh\nis um\nuh yeah well we'll talk about this in\njust a second actually uh in terms of\nhow why you might actually be getting\nhuman understandable Concepts I think\nthat you know the most basic case is\nthat you might get human understandable\nconcepts for a while and so you know you\ncan use this as you sort of scaling up\nthe power of your systems but eventually\nyou probably will stop getting human\nunderstandable Concepts\num yeah questions\neven now with current\num image mode up image recognition\nmodels it's not that clear or not to be\ndo you guys human understandable because\nI mean sometimes we do like these videos\nand rainbows this was a good example but\nmy impression is that sometimes we don't\nand these are just very simple things\nworking on it's not very under yes this\nis a really good question really good\ngood point because you know I I made the\nexact same point earlier in the talk\nright you were talking about the cat dog\nclassifier it actually often doesn't\nlearn just human understandable Concepts\nin just a second I'm going to put up a\ngraph that is sort of you know gonna\ngonna give a rationale for why you might\nexpect this to work when that doesn't\nwork but I agree that it's a serious\nsort of you know concern uh and problem\nfor this style of approach\nokay um but yeah so let's talk about you\nknow why you might expect this to work\nso in terms of the training goal you\nknow we need this outer alignment so you\nknow whether the training goal is\nactually good for the world right so is\na pure predictor actually safe if you\nhave this pure prediction is it actually\ngoing to be okay you know you're going\nto be happy with it\num and this has a bunch of different\nquestions so one is like are we actually\ngoing to be able to make use of it right\nis the knowledge you know actually you\nknow the sorts of knowledge which helps\nhumans which makes humans actually like\nyou know do good things or does it make\nthem do bad things we also have sort of\ntricky questions about whether\npredictors themselves are safe and this\nis something we're going to be returning\nto in a later talk but we have things\nlike self-fulfilling prophecies so if I\nhave a predictor and that predictor is\ntrying to produce predictions such that\nthose predictions are very likely to\ncome true then you know I can have some\nsystem that is like well if it says the\nstock market goes up then people trade\non it and the stock market goes up and\nif it says the stock market goes down\nthe people trade on that and the stock\nmarket goes down and so any prediction\nthe system makes might be true and so\nyou know we might be a little bit\nconcerned that it's actually going to\nproduce the correct the sort of\nprediction that we want because you know\nany of them are equally valid this sort\nof thing can get a little bit tricky\nagain like I said we're gonna we're\ngonna come back to these sorts of\ndifficulties dealing with predictors\nlater\num but but suffice to say there are some\nsort of issues that you might be\nconcerned about even if your model is\njust doing a prediction task\nokay and then again you know we talked a\nbunch about this we have this\nperformance competitiveness issue right\nyou know is it actually sufficient uh to\naccomplish all the tasks you might want\nwith the AIS just to be able to do this\nyou know information extraction in terms\nof you know extracting the the knowledge\nthat the model has learned\num the answer is maybe I think that my\nbest guess would be it is possible for\nsome tasks and not for others so\nespecially for tasks that involve you\nknow a really large deployment or a\nreally you know Fast Response interval\nso something like a dialogue agent or\nlike a you know Factory agent you know a\nlot of things where you might you need\nan AI to you be doing something directly\nin the world and be doing it a lot of\ndifferent cases it seems very difficult\nto make something you know using this\napproach\num but there might be other situations\nwhere you might you might theoretically\nhave otherwise wanted an AI system that\nthis can sort of cover for so maybe you\nwanted to build AI systems to you know\nas you know managers in a company uh\nwell you know uh maybe this sort of can\ngive you an alternative to that which is\nwell maybe it's actually better to sort\nof use the AI system as a way to extract\nreally useful information about how to\ndo management and then give it to humans\nand so it's certainly a situation where\nyou could imagine a lot of possible use\ncases sort of being solved by this but I\nI think my sense would be it does seem\nlike there's some use cases where you're\ngoing to really want AI systems that\nthis sort of doesn't address and so\nyou're like well if this is our only\nproposal for being able to sort of make\nSafe Systems then you might be very\nconcerned that uh you know there's still\ngoing to be a bunch of possible ways\nthat you can build AIS that you know\nthis sort of doesn't doesn't address\nokay\num and then so for the training\nrationale we have this sort of inner\nalignment question you know um is this\nsort of training rationale I'd like you\nto hold are we actually likely to get uh\nsort of human understandable concepts by\ndefault we actually like to get a\npredictor so I talked about this sort of\nAre We likely to get a predictor\nquestion you know this sort of you know\nriding on the Simplicity of the\npredicting prediction objective\num I promised a graph about uh How\nlikely it is that you would get a human\nunderstandable Concepts so here's that\ngraph so the idea is this is sort of a\nhypothesis it's unclear whether this is\ntrue but the idea would be well maybe uh\nas we sort of increase the strength of\nour models the way in which we get the\nsort of Concepts those models learn\nchange like this we think that you know\nas we have really simple models maybe\nthey're really hard to understand uh\nbecause they're doing something you know\nso uh or I guess sorry as we have really\nsimple models they're very easy to\nunderstand we have something like you\nknow um linear regression it's very easy\nyou just look at the you know the\nweights it's got a slope and an\nintercept and then we have you know more\nslightly more complex models and they\nget substantially more uh you know\nconfusing because they learn these\nreally weird Concepts but then you get\nmore powerful models and then they start\nto learn these really simple\nabstractions you know these really basic\nstructurally simple Concepts those sorts\nof Concepts humans use\nbut then you know as you keep pushing\nfurther you get you get reader concept\nConcepts that are like better than the\nones the humans use and ones that you\nknow maybe we don't exactly understand\nand so it starts to get worse this is a\nhypothesis um I think that we have maybe\nsome data to support this you know we\nhave looked at you know how hard is it\nto interpret models over time\num I think that we have seen this to\nsome extent so there are things where\nlike you know very very simple models\nare very easy to interpret and then you\nhave you know more complex early so no\nvision models stuff like uh you know\nalexnet that are often very hard to\ninterpret and then with later Vision\nmodels they often get easier so\nespecially you know once they have stuff\nlike residual connections\num they start to become you know use\nConcepts that are easier to understand\nso there's maybe some evidence to\nbelieve this but\num you know we have no reason to believe\nthat we would get you know necessarily\nthis exact sort of shape around here so\nyou know maybe this would work\num but it's unclear but this would be\nthe sort of thing that you would be\nrelying on if you were if you were sort\nof is this you know going to work you\nknow we actually think you know we're\ngoing to sort of rely on this proposal\nwe we need to sort of be able to rely on\nyou know something like this graph\nholding\nokay yeah question\nI may have missed this but has this been\nobserved to happen up to like a certain\npoint already like how we started to see\nthis curve go down and then up again\nuh yeah so I mean I think we have seen\nthis at least with early Vision models\nso like really really simple Vision\nmodels you know where it's just like\ndoing you know hard-coded Edge detection\nyou know we understand it you know early\ncnns are very hard to understand but\nlater CNN's are often easier\num so so\num at least with simple Vision models we\nhave I think we do see this sort of this\nthis you here but even that is a little\nbit subjective you know it's just based\non well you know people have looked at\nthese and you know how easy is it\ngenerally to find Concepts in them but I\nthink that we sort of see something like\nthis you happening for\num simple Vision models though it's it's\nvery unclear question\nthis diagram looks pretty similar to\ndeep double descent for lecture one so\nis there any connection to that\nI think that uh\nyeah I think that it's not doing that I\nalso I I would say that in some sense\nthis is sort of inverted of what the\ndouble descent graph looks like because\nuh like with the double descent graph\nthe idea is that performance sort of\ngoes goes down here and then goes goes\nup whereas here we sort of you know uh I\nguess if we're thinking about\ninterpretability as performance\num though of course we're imagining that\nthe reason it learns these alien\nabstractions is because they improve\nperformance they're better than the\nhuman abstractions and so\num\nyeah I guess you could maybe think of\nlike this point here where it learns\nthese really confused difficult to\nunderstand abstractions is like the\nworst part of the double descent you\nknow you start with these sort of you\nknow relatively you know you get simple\nthings then you go up and you get these\nsort of dumb memorized things and then\nover time you get you know simpler and\nsimpler things and some of those early\nsimple things maybe you're human-like\nbut then eventually you get these sort\nof simple things that are non-human like\nso if you wanted to sort of superimpose\nit it would the sort of you would look\nyou know more like uh with the the hump\nof the second descent there probably but\nbut I don't think there's necessarily a\nclear relationship but in terms of you\nknow trying to to analogize and\nunderstand yeah another question\nso I'm not sure if this is strict word\nanswer but um what's the force that's\npushing the model to go towards\nincreasing alien abstractions if you're\ntrying to predict data that's been\ngenerated by humans\num and black humans have human\nabstractions it seems like that's the\nuseful level of abstraction to reason\nabout predicting the data in since\nthat's how it was generated so is there\nreally a reason to push beyond that when\nlike once you get there you probably\nhave a good ability to start predicting\nthat data\nplausible I think the thing you're\nsaying is absolutely plausible\num but I think there is an alternative\ncase the alternative case would be well\nyou know a lot of the data is just the\nworld right it is just like how do\nthings in fact function in the world how\ndoes the world work what sorts of things\nhappen in it and how do those things\nrelate to each other we as humans\nunderstand some of those things right\nyou know we have the ability to\nunderstand and predict various things\nthat happen in the world to some degree\nbut we're not like experts at it right\nthere's all sorts of things about the\nworld and Concepts and relationships in\nthe world that we don't fully understand\nthat we don't have great concepts for\nworking with and so you can imagine that\nyou know there's room for improvement\nright there's no reason to believe that\njust in terms of basic concepts for\nunderstanding what happens in the world\nthe humans are you know at the absolute\nForefront and so you could you know\nimagine getting substantially better\nConcepts than the ones the humans have\neven in just the sorts of understanding\nhumans right you know if you're just\nthinking about good concepts for\npsychology right for understanding human\nbehavior there's no reason to believe\nthat human concepts for understanding\nhuman behavior are at the limit right\nyou know there could be better concepts\nfor understanding how to think about you\nknow even how humans work right you know\nthere's a whole field you know of\npsychology right that tries to produce\nbetter concepts for understanding how\npeople humans work there's no reason to\nbelieve that you know we are at the\nlimit of psychology or something\nYeah question\nuh to go back to the chess Excel thought\nwe have a neural network what says uh\nwould you think that the current systems\nare already in the increasing me and\ninfection domain or which is say that we\ntry interpretability of those we will\nget Chris abstraction with the arrival\ndid yeah that's a really good question I\nthink the answer is maybe there's some\ninterpretability work that has been done\non like alphago you know where\num some uh you know some some very good\nyou know chass and go players have tried\nto look at some of the things that the\nmodel you know is paying attention to\nand understand what it's doing and\nsometimes we understand and sometimes we\ndon't\num you know it often you know there was\nsome work that found that a deep mine\npaper they found there was like a large\namount of correlates between you know\nthe sorts of things the Stockbridge pays\nattention to and the sorts of things\nthat you can sort of probe out of alpha\nzero\num but but even then you know it's not\nperfect there's a lot of things that\nit's clearly paying attention to they're\ndifferent than the things that we know\nhow to pay attention to so it's unclear\nI think the answer is you know maybe\nokay\nand then yeah so implementation\ncompetitiveness how hard is this\nactually influence\num there's some you know there are some\nquestions you know how hard is it to\nactually build uh you know a system\nwhich is doing\num\nuh you know prediction you know\nhopefully this is not that hard because\nit's like one of the most basic things\nthat we often do in machine learning um\nbut there's also a question which is how\nhard is it to actually use the\ntransparency tools that might be\nextremely difficult right it may be the\ncase that actually even if we can\nextract the concepts it's very\ncomputationally intensive or you know\nhuman labor intensive to actually be\nable to understand and interpret in\nwhich case that could you know be\nanother thing that's a real uh you know\na very serious problem here\nokay so we have some understanding now\nof what it sort of looks like you know\nto to sort of understand this this\nproposal we have these ideas you know\nhow to think about you know what the\ncompetitiveness is how to think about\nyou know the alignment overall you know\nI think that there's there's some\nreasons to like this approach and some\nreason it's not like it right places\nwhere it might be helpful places where\nit might not be helpful the idea of\ndoing this sort of analysis to\nunderstand you know when can we trust it\nwhen can we believe that it's going to\nsolve various problems that we have\num uh you know so that we can we can\nfigure out how to make use of these\napproaches right so we're not just ah\nyou know here's an interesting approach\nwe have some understanding of how it\nactually fits in to you know the broader\npicture of you know how we can use\nvarious different approaches to sort of\nyou know eventually you know uh you know\nas Humanity have ways to be able to\naddress all the problems that we need to\nbe able to address\nokay so uh Now sort of for the sort of\nthe last bit of this talk I want to sort\nof take a step back and talk a little\nbit more generally about you know what\nare the various different sorts of\ntraining stories that we might tell in\ngeneral where do we get evidence right\nwhat are the sorts of training goals we\nmight imagine uh and what sorts of\npieces of evidence might we find uh you\nknow and ways in which we can have to\ngenerate evidence to believe that you\nknow some particular you know training\nrationale would hold\nokay so what are some training goals so\nyou know one example of course is the\nexample that we talked about previously\nin this sort of strict inner outer\nalignment sense right which is you know\nmaybe the training goal we want is we\nwant a model that is just directly\noptimizing for that you know loss or a\nword function that we specified we want\na model that's just directly doing the\nthing that we wrote down this is one\nthing we might want right it is\nabsolutely a thing that you might be\ntrying to get in various different\ncircumstances right now it's not always\nthe thing that we want right so in\nmicroscope AI it's definitely not the\nthing we want right we're not trying to\nget a model that is like minimizing its\nyou know its accuracy on on something\nwe're just trying to get a model which\nunderstands the world right uh in some\nvery you know General sense and so it's\nnot always the thing that we want but it\nsometimes is right you know sometimes\nsometimes maybe that is the sort of goal\nyou're trying to get you know you\nactually believe if you have written\ndown an objective that captures\neverything that you care about and you\nwant a model that just is trying to\noptimize for that objective\num so you know the idea would be that\nyou know in this case we sort of have a\nmodel that is uh you know instead of\noptimizing for some direct thing it's\nsort of just optimizing for the reward\nnow the problem with this training goal\nright is that it's sort of a little bit\ndangerous right because we have a\nsituation where uh you know we sort of\nget reward hacking potentially which is\nthis idea of well you know it's very\nvery difficult to have a reward function\nright or a loss function that actually\nfully specifies all the things that you\ncare about and so there may be\nsituations where the model can sort of\nget something that satisfies the\ntechnicalities of the loss but doesn't\nactually do the thing we really\nintending it to do\num you know classic examples of this are\nsituations where you know there's this\nsort of classic boat uh game where\nopening is trying to get the model to\nyou know uh you know get a boat to sort\nof go around in circles uh and it\ndiscovers that you know it is actually\ntrained on with sort of getting these\ngold coins in the in the race that are\nsort of spaced around and it finds they\ncan just sort of run in circles hitting\nup against the wall to get this one\nrespawning coin over and over again and\nso the idea is while this was\ntechnically a thing that satisfied the\nlaw loss function but it wasn't really\nthe thing that we intended and so this\nis sort of a thing that you can be\nreally concerned about with this type of\ntraining goal so we really want\nalternatives\nokay so here's another thing that we\nmight want maybe we want an agent that\nis sort of myopic so we talked about in\nthe case of deception you know one thing\nthat we're really concerned about is\nAgents they're sort of optimizing for\nthings over a long time Horizon right\nthey care about something in the world\nthey want to get something in the world\nover the long term and that's sort of\nthe reason that they try to deceive us\nso maybe the thing you want is you want\nan agent which isn't trying to optimize\nanything in the long term it just has\nsome sort of short-term goal\num this is like another thing that you\nmight want this is another sort of type\nof training goal that you could have\num but there are problems here too so\nyou know issues that sort of arise in\nyou know trying to make these sorts of\ntraining goals desirable\num I'll relate to these sorts of issues\nwe were talking about about\nself-fulfilling prophecies and sort of\nissues we're going to talk about later\nabout you know in case we have these you\nknow agents that are just you know\ntrying to optimize one thing in the next\ntime step\num there are cases where that can sort\nof break where that's sort of a very\nbrittle abstraction so you know a simple\nexample might be\num if I am just sort of just optimizing\nfor my one time step but I know that I\nhave the ability to sort of cooperate or\nnot with a bunch of other agents that\nare also optimizing for their only for\ntheir one time step then we can all sort\nof agree well if we do this thing then\nit'll be best for all of us overall\num and so they can all sort of cooperate\nand coordinate on doing one individual\nthing that is best for all of them and\nthat can in practice look like\noptimizing over a very long time Horizon\nbecause they're sort of coordinating\ntogether over time even though each one\nindividually only cares about one sort\nof time step so so you can have examples\nlike this where it sort of gets very\ntricky but you know this is another\nthing you might desire right we want\nsomething which is really just\noptimizing over the short term\nokay you know another thing we might\nwant is a sort of truthful question\nanswer so this is you know similar to\nthe thing we were talking about uh you\nknow at the beginning right you know\nPaul's example where we have you know a\nmodel and it just sort of truthfully\ntranslates between its model of the\nworld and the questions that we ask it's\nthis sort of direct truthful translator\nyou know honest question answer you know\nthere are various ways which we might\ntry to get this you know uh things like\ndebate the applications we'll talk about\nlater\num but you know something which is just\nsort of directly trying to answer you\nknow questions truthfully\nokay uh maybe we just want you know one\nthing we might want is we just wanted to\nlearn human values right you know it is\ntotally you know on a possible training\ngoal you know is the sort of most fully\nambitious training goal right we're just\nlike we want a thing that is just\ndirectly maximizing for you know the\nthings that humans care about right this\nis maybe you know the most original most\nbasic thing that oftentimes people who\nthought about AGI have sort of wanted\nbut you know it's only one of many\npossible things that we might be trying\nto get in various different\ncircumstances it is one thing that we\nmight try to train a system for it's\nworth pointing out it's maybe probably\nthe hardest thing of all of these right\nit's a really complex goal it's one\nthat's very difficult to specify right\nso it's one that can be can be very\ndifficult to get but it might be your\ngoal right in some cases eventually\nmaybe we might want to build these sorts\nof systems though I would say we\nprobably don't want to build these sorts\nof systems uh in the short term at least\nokay uh you know we talked about\ncorrigibility right so I was saying\ncourage ability you know in this sort of\nbasic behavioral sense is not you know\ngood enough as a uh adjective but there\nmight be other senses right so we talked\nabout the corrigible in line models\nright like the Martin Luther's might be\nanother sort of thing that you know you\nmight be trying to get Yeah question\nsorry I just realized that I don't\nreally understand how those minimizing\nmodels would fit into this list like\nwhy why would that be safe like what\nwould be the loss function for which it\nwould be safe isn't it before the\naligned already\nuh yeah I mean so I guess that there are\nsome cases where it might not be the\ncase that the loss function specifies\neverything about human values but it\nstill specifies enough that you're sort\nof okay that it's not going to like\ndestroy the world right so maybe your\nloss function specifies something about\nimpact regularization so it specifies\nyou know I want you to uh you know just\nfetch the coffee without doing anything\nelse crazy in the world right and maybe\nif you actually believe you've really\nnailed down what it means to not do\nanything else crazy in the world than a\nmodel that is really just optimizing for\nthat would be okay right and so there\nare cases where you could have a loss\nfunction right and a model that was just\ndirectly optimizing for that and you'd\nbe okay with that even though that loss\nfunction wouldn't like you know imply\nthat it's doing the right thing in all\nsituations it at least wouldn't be doing\na sort of sufficiently wrong thing that\nyou would be really concerned about that\nyou know they'll be bad for the world\nokay\num and so great so courage ability so\nyou know we don't want this sort of\nBehavioral description but maybe we\ncould get something more mechanistic\nlike this sort of Martin Luther models\nright\num okay uh you know we've talked about\nthis sort of predictive models you know\nmodels like in the microscope AI case we\nwere just trying to train a sort of\npredictive uh model that is just trying\nto do some uh you know sort of trying to\nunderstand the world it has some World\nmodels to try and do some prediction\ntask we're going to return to this\nspecific case more later uh because\nthere's a lot sort of more to talk about\nin terms of what it might look like in\nthis sort of specific case I think this\nis often what a lot of people want with\nlarge language models\num and so this is sort of a pretty\ncommon training goal and I think one has\na bunch of really interesting properties\nuh it's another thing that you might\nwant people often refer to these as sort\nof generative models or simulators the\nidea is you know a model this really\njust has some World model is trying to\ndo a basic prediction task based on that\nokay\num oh an important thing to think about\nuh is the sort of difference between the\npredictive models and the truthful\nquestion answers so the truthful\nquestion answer also has a world model\nbut it answers questions truthfully so\nif you ask it you know\num some question about the world it will\ntell you exactly what is true about the\nworld whereas the predictive model will\nonly tell you you know what it would\nwhat you would observe right so the\npredictive model is maybe predicting you\nknow what would show up on the internet\nin some particular case you know what's\nwhat is most likely what sort of tokens\nare most likely to occur on the internet\nwhat sort of tokens are most likely to\noccur uh in some particular situation\nand that's not necessarily truthful so\nthe predictive model is not always\ntruthful whereas the the truthful\nquestion answer is always truthful Yeah\nquestion\nso does that mean like theoretically you\ncould train either one of those agents\nyou would always want to train the\ntruthful question answer and there's no\nAdvantage a particular model has\na gist of the predictive model is easier\nto train with our current method\nyes I think that's basically right so\nyou if you if you could theoretically\ntrain either one it would be equally\neasy you would want to train the\ntruthful thing but you know maybe you\nthink that it's too hard to train the\ntruthful thing but it's easier and\nsufficient to just train a thing which\nis making you know more general just\nlike predictions\nokay\num\nokay so you know another possible thing\nis sort of narrow Asians so you know\nmaybe you want an agent to sort of\nrelate to the impact or utilization\nthing they were talking about previously\nthat you know just solve some particular\nnarrow task\num but doesn't do you know some very\ngeneral uh you know saw you know\noptimized for human values in some very\ngeneral cases just you know trying to\nyou know get the coffee and you're\ntrying to train in a case where it's not\ngoing to be doing any other things right\num this is like you know another sort of\nthing you know gold you might want you\nknow another training goal is the sort\nof you know concept I think that exactly\nhow this thing would work\nmechanistically that was a little bit\npoorly understood right what actually\nwould it look like for an agent to be\ndoing something like this there are some\nyou know possible hypotheses or\nsomething like quantization would be an\noption where the idea is you know it has\nsome objective but then rather than\noptimizing for that objective it only\nsort of uh takes the top 10 of outcomes\nthen uniformly randomly picks from uh\nfrom from so maybe that is a lot safer\nbecause it's not sort of directly\noptimizing it's just sort of doing some\nnarrow task and only you know so well\nokay uh and I I definitely you know one\nthing I really want to point out is this\nis not an exhaustive list so the idea of\nthis sort of thinking about you know AI\nin this way uh is not to sort of be like\nokay these are the possible options you\nknow pick one the idea is to sort of\nreally thinking about okay what are all\nof the different possible things that\nyou could have in machine learning you\nknow process uh you know try to find\nwhat are the sort of algorithms we might\nwant uh you know so that we can have you\nknow you know a large possible space of\nyou know what are the things we might\nmight be trying to get so we can think\nabout which things do we want in\ndifferent circumstances which things are\ngoing to be safest which things can be\neasiest to get uh etc etc\nokay okay so we might also want to do\nthe same thing with rationales so you\nknow we've talked about you know this is\nyou know some you know these are sort of\npossible goals you might have we also\nwant to understand what are the possible\nways of gathering evidence about these\nuh you know particular types of training\ngoals what things might convince us that\nyou would actually have a machine\nlearning process which would get a model\nwhich is well described in that way\num so what might that be so you know one\nthing is inductive bias analysis so\nwe've talked this is sort of the thing\nthat we spent a bunch of time doing in\nthe previous lecture you know really\ntrying to understand okay in various\ndifferent versions of the inductive\nbiases of you know the the you know\nmachine learning system and how we can\nthink about them How likely is deceptive\nalignment right and this gives us some\ninformation right it tells us some\nthings that you know we can predict\nAbout You Know How likely is you know\nsomething like deceptive alignment in\nvarious circumstances but it's very\ndifficult piece of information to work\nwith because it's really uncertain you\nknow we don't know that much about the\ninductive biases right now\num and so it can be hard to sort of\nreally reason about this rigorously\nthere are some examples of sort of doing\nthis a little bit more rigorously so I\nhave one over here that is a little bit\nmore rigorous example of inductive bias\nanalysis but in a little being a little\nbit more rigorous it's also much more\nnarrow it's sort of focusing on a much\nsort of uh you know narrower uh task\num but but you can try to do this you\ncan try to sort of work through these\nthings in theory and try to understand\nyou know based on various different\nversions of the inductive biases how\nwould things go this is also using a\nmuch more sort of a very theoretical\nsort of notion inductor biases that is\nnot maybe very you know well grounded\nand so you know there's there's often\nhere a lot of tension between Type you\nknow inductive is analysis that we can\ndo in theory and actual practical\ninductive biases that would exist in\npractice and so you know we saw a lot of\nthis tension previously where you know\nin the last talk we had to use these\nentirely two different Notions Dr biases\nto even have any reason to believe that\nyou know we're going to get some\nconversion result at the end\nokay so that's one way that we can do it\nand one advantage of the inductive bias\nanalysis of course is that it you know\nit's something that we can do sort of in\ntheory even before we built the system\num transparency durbability is sort of\nthe opposite this is something that we\ncan do sort of often it can give us a\nlot of information you know if we just\nlook at the model and we see ah it's\ndoing Windows type of Wheels we can get\na ton of information about what it's\ndoing but it's something that is that is\nsort of very difficult to do in advance\nright so we can maybe do some\ntransparation durability on early models\nwe can maybe see you know how it's being\ntrained to do transparent\ninterpretability along the way sort of\nsee how it's developing but this is you\nknow it's very difficult to get\ninformation about the model uh in\nadvance by doing transparency because we\noften have to sort of build the model\nand then figure out what it's doing and\nthat can oftentimes be sufficient you\nknow it can be okay to just build the\nmodel and then look at and see if it's\nokay\num but it also can be a little bit\ntricky if you sort of think that just\nbuilding the model might itself be\ndangerous which at least for\nsufficiently powerful models is\npotentially a possibility\num but I I another thing I would say\nhere is I also think this is maybe the\nstrongest piece of evidence on this list\nright uh you know what is the thing that\nwhen most convince you that the model is\ndefinitely doing the thing that you\nwanted well we looked at it we\nunderstood exactly the internal\nstructure and it had this internal\nstructure right it's doing exactly this\nthing and so maybe you know I think in\nsome sense this is maybe one of the\nstrongest piece of evidence if you can\nget it though it's very difficult to get\nbecause you have to actually do this\ntransparency and understand the concepts\nthat it's using which can often be\nreally tricky\num okay so you know another thing maybe\nis the sort of game theoretic analysis\nso if you're trying to think about\nsomething like the you know train for\ncooperation right that we were talking\nabout previously well why would you\nbelieve the train for cooperation would\nwork well you know the training goal\nthere right wouldn't be you know just\ndirectly optimized for cooperation the\nidea would be well maybe the sort of\nequilibrium of you know various agents\nplaying in this environment is the sort\nof equilibrium that we want so you know\nand sort of example of this uh you know\napproach is sort of thinking about well\nmaybe you know you know humans in fact\nlearn some particular values and we\nlearn those values you know based on\nthem being you know good in some\nancestral environments so you're like\nokay you know maybe we can have some\nenvironment where the equilibrium\nsolution you know the agents that you\nwould be most likely to learn would be\nthe sorts of agents that um you know\nwould have the sorts of values that we\nwant so you could try to you know get\nsome evidence by this you know doing\nsome sort of theoretical analysis or or\nempirical analysis of you know in some\nenvironment what are the sorts of agents\nthat are likely to do well in the\nenvironment uh you know they're likely\nto sort of cooperate and coordinate with\nother agents that are likely to sort of\nsurvive thrive uh you know past multiple\nrounds of natural section etc etc or in\nthis case artificial selection\nokay\num another thing that's maybe worth\npointing out here is\num capability limitations so you can\nhave situations where uh you know the\nreason that you believe your model is\nyou know going to be you know\nimplementing some particular thing is\njust because you didn't give it a bunch\nof information or you just didn't give\nit the ability to implement some other\nthing right so if you think about\nsomething like the cat dog\nclassification example maybe the reason\nthat you believe it's safe is just that\nwell it's such a simple task that it\ncan't possibly learn you know these\nreally complex things that would be\nnecessary for it to do something\ndangerous so this is like another piece\nof evidence you know important piece of\nevidence that you can use here you know\none example of this is you know trying\nto sort of maybe restrict modeling\nhumans because maybe you believe that\nbeing able to understand humans is\nreally important for being able to\ndeceive humans and so maybe you just\ndon't try to give it understanding about\nhumans then you could you know be more\nconfident in it of course that's hard\nbecause we often want to be able to\nunderstand humans so this can get very\ntricky but it's another you know piece\nof evidence that you can use\nokay uh you know having oversight right\nso another piece another way you can get\nevidence is well throughout the entire\nprocess of building the model I've been\nable to look at it and understand what\nit's doing\num we're going to talk later on about\nyou know what these sorts of oversight\nprocesses might look like amplification\ndebate\num but the idea is well okay if I have\nsome way of sort of continuously\noverseeing the model's Behavior then\nthat gives me some information to sort\nof predict well okay you know I have\nsome reason to believe that it's going\nto be doing the right thing because I've\nbeen looking at it as I've been training\nit\num another thing that I sort of uh like\nto talk about the sort of AI cognitive\nscience idea so you know maybe just by\nlooking at the model's Behavior we can\nbuild hypotheses about what sort of\nmechanistic you know things might be\ndoing internally and then put the model\nin various different situations to test\nthose hypotheses and then extrapolate\nforward so you know if we're thinking\nabout okay we have two different\ntraining regimes we want to understand\nHow likely is that training regime to\nyield a model that is you know\noptimizing something over the long term\nor not we can build you know hypotheses\nabout you know what would it look like\nwhat sort of consequences would happen\nfrom a model that's doing that versus\nnot doing that and then we can you know\nfigure out okay does this training\nprocedure yield that sort of model or\nnot and then we can predict okay if we\nkeep doing with this training procedure\nit's likely to yield you know things\nwhich are consequentialist and in this\ntraining procedure it's not and so we\ncan sort of use that as a way to predict\nyou know what's sorts of behaviors are\nwe going to get later on you know one\nthing that's very tricky with this sort\nof approach is that it's very behavioral\nright we're just focusing on you know\nokay making inferences based on things\nthat we observe about the model's\nBehavior which of course you know can\ncan get us into problems with with\ndeception like we were talking about\npreviously right where you can't always\nbe fully confident meant that the model\nis not deceptive just by looking at its\nBehavior because it could be trying to\ntrick you\nokay so one thing that's sort of very\nrelated is the sort of precursor\nchecking which is well instead of you\nknow if we if we think that behavioral\nchecking is always sort of going to be\npotentially running into issues well\nmaybe instead of looking for something\nlike deceptive alignment directly since\nif you look for deceptive alignment\ndirectly then the model can always sort\nof be trying to trick you you look for\nsome precursors things that are\nnecessary for deceptive alignment to\narise but not deceptive lineman itself\nright so is the model you know thinking\nabout something you know uh thinking\nabout the training process is it\noptimizing a long-term goal right you're\nlike okay what if we just look for those\nsort of precursors then maybe you can\nhave more confidence right even only\ndoing behavioral analysis\num another important sort of component\nof something like this uh\num so yeah another thing you could do\nwould be like lost landscape analysis so\nthis is sort of related to the inductive\nbias analysis but maybe sort of on the\nmore High path dependent side so with\nthe inductive bias analysis we were\nthinking about you know something more\nlow path dependence where we're just\nsort of thinking about you know\nSimplicity and you know and speed but\nyou can also think about things directly\nby looking at the you know the basins\ntrying to understand How likely would\nvarious different you know paths through\nmodel space p\num so it's worth bringing out that\nthere's you know both low and high path\ndependence versions of this\nand then finally another thing is sort\nof this is related to the precursor\nchecking idea is you could do sort of\nscaling laws where you try to understand\nyou know as I you know vary various\nproperties about my models how does it\nchange various alignment properties\nright and I can use that to sort of\npredict you know as if I'm training in\nthis particular way I generally get\nmodels that have you know long-term\ngoals and I train in this way I\ngenerally get models that don't and I\nsee the long-term goals going up here\nthen you can predict that mail that it's\ngoing to continue going up and you know\nyou're going to eventually end up\npotentially sort of in in the regime\nwhere you can get some long deceptions\nyou can use these sorts of ability to\nunderstand earlier models to you know\ngive some information about what future\nmodels might do\nokay uh and again you know this isn't uh\nyou know an exhaustive list there's lots\nand lots of other ways that you can get\npieces of evidence and information uh\nabout you know what sort of algorithm\nyour model is going to be learning but\nthe basic idea is this is the this is\nthe sort of business we want to be in we\nwant to be you know uh trying to gather\ninformation this way to help us\ndistinguish between possible algorithms\nthe model could be could be wrong\nokay and then one very final thing that\nI want to talk about is you know once\nyou have this training story you have\nsome you know basic understanding of\nwhat mechanistic algorithm you want to\nbe doing you know why you believe it's\ngoing to be doing that you also sort of\nyou know want to then you know really\nhave some ability to understand okay you\nknow how robust is this understanding\nright we have some reason to believe\nthat we think maybe this training\nprocess is going to go through uh you\nknow this training story is is going to\nbe correct\nbut then you also sort of really want to\nbe able to put in a bunch of analysis\nand be able to get you know\nprobabilities out for people to\nunderstand okay you know do we actually\ntrust this right because there's a\nreally big difference between okay this\ntraining story sort of makes sense I\nmaybe understand you know why I would go\nthrough and like we're super confident\nyou know we're willing to you know stake\nthe world on you know this being true we\nhave like very good you know rigorous\nanalysis right and so we sort of want to\nbe able to understand okay how can we\nget to that point right like what are\nthe sorts of things that you could do to\nreally test and you know probe at your\ntraining story so the idea is this sort\nof sensitivity analysis so you know you\ncan analyze how robust your training\nstory is uh by looking at sort of how\nsensitive it is right so you can be you\nknow uh we can look at other you know\nsimilar smaller models you know see how\nthey fail you know extrapolate those\nfailures right if we have a bunch of\nother training stories you know and\nwe've seen how well those training\nstories do in general we can have an\nunderstanding of How likely is any given\ntraining story to succeed right so we\nthink about something like the cat dog\nclassification example we can be like\nokay you know maybe a priori we expected\nit to do this particular thing we\nexpected it to do uh you know human you\nknow Concepts and then we found out that\nit didn't right and so we're like okay\nin general when we build training\nstories how often do those training\nstories actually match on to what we\nfind and so you know how often are we\nwrong right and so we can use that as\nsome understanding of okay as we start\nbuilding more and more complex training\nstories they're more and more difficult\nfor us to really predict in advance you\nknow how often are we actually able to\nget them right\num you know we can also just sort of\ncharacterize the space of possible\nmodels right so you know we did this you\nknow previously we're like okay you know\nsome of the possible options you could\nget are things like you know the Martin\nLuther models uh you know there's\ndefinitely align models you know the uh\nyou know the internal line models all\nthese different types of of models and\nthat gives us some understanding of at\nleast what the options are so we can\nunderstand okay how bad were the\npossible failure modes be can we at\nleast rule out you know some of the\nworst possible failure modes\num and you know we can also sort of look\nat uh you know even direct sort of\nperturbations we can be like okay once\nwe have training stories which are as\nmechanistic really mechanistic and\nreally clear uh you know okay we think\nwe're gonna get exactly the sorts of\nalgorithms we can start to understand\nyou know what happens if individual\nParts on that path go differently right\nso we were talking about you know in the\nhigh path dependence case where we're\nthinking about you know this is like the\npath to the query line the Martin Luther\nmodel you know first you you know you\nlearn these sort of world modeling and\num you know proxy objective\nsimultaneously but then you know uh and\nthen you sort of eventually you know\ngets replaced with a pointer but the\ndeceptively aligned question you know we\ncan sort of ask well what if there was a\nsmall perturbation right what if instead\nyou learned you know um you know first\nyou learned you know how to understand\nthe training process and you use that\nright and so we can sort of think about\nthese paths and think about well okay\nwhat if you know various individual\nthings in our understanding of how the\nsort of model is going to you know be uh\nbe trained we're slightly different we\ncan sort of you know reason from these\nperturbations to maybe sort of start to\nhave some more confidence that even if\nour model was slightly wrong things\nwould sort of still go the way we want\nokay this is again not an exhaustive\nlist but the idea is you know some ways\nto sort of you know once we have these\ntraining stories we don't want to just\nbe done with like okay here's a maybe\nplausible case for why we think some\nmachine learning training process would\nyield a particular algorithm we really\nwant to be as robust and as confident as\npossible that this training process is\nactually going to yield the algorithm\nthat we want\nokay so that's the end of the talk uh\nhopefully this was a little bit helpful\nfor starting to get into understanding\nyou know evaluation right starting to\nget an understanding okay we have some\nidea of the problems you know how do we\nactually uh figure out how to evaluate\nvarious different proposals\n[Applause]\nforeign\nokay any last questions\nyes\num you've said World models a couple\ntimes in this talk on the last talk um\ncould you tell us exactly what you mean\nby World model\noh that's a tricky question yeah I mean\nI think that\num and this is maybe something also I\nwanna we'll sort of touch on a little\nbit when we talk about prediction but\nit's It's Tricky I mean I think that the\nbasic idea is we're like okay it's some\nunderstanding of the world that is not\nnecessarily sort of that is just\nseparate from how you act on that\nunderstanding right so you have some way\nof understanding facts about the world\nthink patterns in the world things that\nyou know about how the world works and\nthat's distinct from okay given that I\nknow these facts about the world you\nknow and I'm trying to achieve you know\ngoal X I choose to Output you know\naction a right and so we have some\nunderstanding that is like okay\nwe really want to separate those things\nwe want to separate your understanding\nfrom the way you use that understanding\nto produce outputs and so we sort of\nwant to call the understanding part the\nworld model now I think that that\nseparation is a little bit fraught it's\nnot clear that it's always even possible\num but oftentimes it is a really useful\npiece of analysis and so we'd like to be\nable to do it\num\nbut it is I think it is really tricky\nokay well uh we'll call it there so uh\nyeah hopefully that was that was good", "date_published": "2023-05-13T15:57:02Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "028288eed8170c7696abec516c28f0f1", "title": "1:AGI Safety: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=NmDRFwRczVQ", "source": "ai_safety_talks", "source_type": "youtube", "text": "I am Evan humier I'm a safety researcher\nat anthropic and I'm going to be talking\nabout AGI safety so\njust a little bit about me before we\nstart so I said I'm currently at\nanthropic I do empirical and theoretical\nAI Safety Research there uh before that\nI was a research fellow at Miri the\nmachine intelligence Research Institute\nand I did other stuff for that at open\nAi and other places\nokay so uh essentially what we're going\nto try to talk about is I want to teach\nyou over the course of this whole uh\nsequence how I think about existential\nrisk from artificial intelligence\nso what does that mean so existential\nrisk we're sort of imagining a situation\nwhere humanity is in some sense\ndisempowered where uh the you know the\nfuture is no longer sort of in our\ncontrol in some respect that could be\nExtinction it could be every single\nhuman is dead uh or it could be you know\nsome other scenario where we lose\ncontrol something there where maybe you\nknow there are still humans around but\neffectively those humans have you know\nno control over the the course of the\nfuture\nokay and we're going to be focusing on\nprosaic artificial general intelligence\nso what does that mean so prosaic means\nwe're imagining a situation where we get\nto very powerful Advanced AI systems\nwithout sort of any fundamentally new\nadvances in how we're thinking about\nintelligence\nuh that means we're essentially going to\nbe looking at machine learning uh the\nsort of you know current broad Paradigm\nof how we do Ai and imagining that that\nis in fact the thing that gets us to uh\nvery powerful AI systems and this sort\nof notion of very powerful is you know\napproximately something like this notion\nof uh artificial general intelligence\num we're not really going to rely too\nheavily on this sort of generality\nnotion but essentially the idea is you\nknow uh some system that is\napproximately as capable as a human\nacross a wide variety of domains\num\nand so that's sort of what we're what\nwe're looking for we're going to try to\nunderstand you know what these systems\nmight look like how they might be\ndeveloped uh and why they might be\ndangerous\nand the hope is that you know this\nsequence should be accessible regardless\nof you know what sort of prior knowledge\nyou're coming into it with uh so you\nknow we're gonna really try to cover as\nmuch as we can uh you know to help\npeople understand\nokay so this is the broad outline for\nthe sorts of things we're eventually be\ntalking out of the course of the whole\nsequence\num today we're just going to be doing\none and two\nokay okay\nso machine learning so I think\num any situation where you want to try\nto understand what is you know going to\nhappen with you know current very\npowerful AI systems and you know what\nthose systems might do in the future we\nhave to start by understanding what it\nis that the you know the mechanism that\nactually produces the system how does\nmachine learning work uh you know\nfundamentally so\nnow I have here what is sort of our\nprototypical machine learning algorithm\nthis is going to be you know sort of\nwhat we're what we're thinking about\nthis is a little bit intimidating\num that's fine we're gonna sort of you\nknow I'm gonna really talk through you\nknow everything that's going on you\ndon't have to understand what I put up\nhere but essentially what this describes\nis the process of machine learning which\nis we have some function and that\nfunction is parameterized so there's a\nbunch of parameters that describe how\nthat function operates so you know for\nexample we could think about a linear\nfunction is described by two parameters\nit's described by The Intercept and the\nslope\num among other ways described lines and\nwe're imagining that you know those\nparameters associate those parameters\nare determining how the function works\nand we get to search over what possible\nparameters we want to find the function\nwhich results in the behavior we're\nlooking for\nand in practice the way we do this is\nvia gradient descent or we have this\nparameterized function this function\nthat has you know in practice well\nMillions potentially billions you know\nmany many possible parameters that\ndetermine exactly how it operates and we\nsearch over the space of the settings of\nthose parameters to find a sort of\nparameter setting which results in good\nbehavior on some uh data distribution\nand some loss function that sort of\ndescribes what we're looking for on that\ndata distribution and we do that search\nfor your gradient descent where we at\neach individual Point sort of calculate\nthe derivative of the loss and and step\nin that direction\nokay so so how do you think about this I\nmean it's a it's a you know structurally\ncomplex algorithm I think the best way\nto sort of conceptualize and think about\nwhat machine learning is doing is by\nthinking about lost landscapes\nso what is this so well we can imagine\nyou know if you look at each individual\nparameter as we're sort of varying those\nparameters and searching over the space\nof possible parameters\num we can plot you know how well does\neach particular setting of parameters\nperform uh on our on our loss on our\ndata you know what is does that\nparticular setting of parameter result\nin good performance bad performance what\ndoes it do and so that's what we have\nhere we have a sort of dimensionality\nreduction we've looked at a couple of\nparticular neural networks uh you know\nsorts of uh parameterized functions that\nwe often use in practice and we want to\nunderstand for those particular\nparameterized functions for each\nindividual setting of parameters how\nwell is it doing so you can see for\nexample over here there's a bunch of\nthese sort of values and nooks and\ncrannies where you know individual you\nknow parameter points might do really\npoorly you know like out here in the red\nor they might do really well you know\ndown in the valley in the blue\num and the process of grading descent is\neffectively rolling down these slopes so\nwe have this sort of lost landscape that\ndescribes you know what the different uh\nparameter settings do in terms of how\nwell they perform and we're searching\naround this space sort of rolling down\ntrying to find these values that do that\nhave parameter settings that perform\nreally well\nthere's a couple of things that I that I\nwill talk about about this this sort of\nparticular setup because it's a little\nbit\num disingenuous in a couple of ways but\nyeah question first\nso what exactly are we looking at here\num I don't see any like x y z axis\num descriptions and also yeah this\nthese parameters which layer are we\nlooking at in these different networks\num yeah\num I'd just like to have a clearer\nvision of what I'm looking at here yeah\ngreat great question so uh a couple of\nthings so first is\num\nin terms of like what were the axes uh\nwe can think about the the depth here\nright is describing the loss it's\ndescribing and the loss just means how\nwell do you perform on the data set so\num we'll talk about you know some\nexamples of you know what that might\nlook like in just a little bit but\nessentially where you know we have some\nset of data and we want to find a\nfunction which does something on that\nset of data we have some desired\nbehavior and and this sort of\nessentially is the you know the depth\nhere is determining how good is the\nbehavior of that particular set of\nparameters and the axes are different\nparameters so you can imagine I think in\npractice these particular case is a\ndimensionality reduction but um you can\nsort of think about you know this being\ntwo parameters so we have you know one\nparameter on X and one parameter on y\nthat describe you know two specific\nparameters that we're searching over so\nif this was like a linear function right\nand we in fact just had two parameters\noverall you know then you know one of\nthese would be all the possible values\nfor the slope the other would be you\nknow all possible values for the\nintercept and we would just be looking\nto see you know which combination of\nslope intercept values results in the\nbehavior that we want\num and so that's sort of what we're\nlooking at here in in fact\num you know like I said it's not quite\nthat so it's a dimensionality reduction\nand also\num you know because in practice you know\nthe number of parameters is really\nreally big uh we have we're you know\nmillions billions of parameters in these\nsorts of largest Networks\nand so you can't just sort of represent\nit you know in this sort of\nthree-dimensional you know you need to\nbe able to see in millions of Dimensions\nto really sort of understand what was\nhappening here but\nsame sort of thing you know replicated\nacross many dimensions you know in each\nindividual parameters that we're looking\nat there's going to be you know some you\nknow situations where we can sort of\nfall into valleys in that parameter\num and and and and you know these these\nsame sort of shapes arise now there are\nsome important facts about the fact that\nit is so large dimensional\num because it is in fact really large\ndimensional there are some things that\nsort of your intuitions about lower\nDimensions won't quite work so you know\none of the things that we'll you know\nwe'll see is that it really you know\nchanges the volumes of these basins so\nyou know we could think about you know\nthis you know falling into this\nparticular area you know all of these\nsort of things around here if the you\nknow we follow the gradient and we just\nsort of fall down they're all going to\nfall into the same Basin here\num but thinking about volumes of basins\nin very high dimensional spaces is sort\nof counterintuitive because volumes\nrapidly expand with radius when you have\nvery high dimensional spaces so you know\nif we you know in a you know\ntwo-dimensional space right you know\nit's proportional to r squared the\nthree-dimensional space it's our Cube\nyou know what an n-dimensional space in\na million-dimensional space you know the\nrate the volume is proportional to you\nknow R to the millionth power and so uh\nyou know we end up in this sort of\nsituation where some things can start to\nget a little bit counterintuitive\num but but essentially the idea is that\num you know we're trying to you know\nlook for these sort of you know\ndesirable basins Yeah question\nuh yeah go for it oh just to summarize\nthe blue part is where the loss is\nbetter or like lower and that's where we\nend up right and then\num yeah perhaps as a follow-up question\nwhy the heck is the thing on the left so\nrugged and like that's a very good\nquestion I haven't talked about that yet\nI will talk about I think it's not I\nthink it's kind of an interesting uh\nthing to note I'll just mention briefly\nI think it's not super important but\nbasically what's happening on the other\nside here is this is a particular\ninstance of a case where these are two\ndifferent architectures where they're\ndifferent functions that have been\nparameterized in different ways\num\nin particular the actual difference here\nis that the smooth one has something\ncalled residual connections and the\nrugged one doesn't have residual\nconnections\nin practice what residual connections do\nis they they substantially smooth out\nthe landscape now it's a little bit\ndisingenuous to say that this is like\nyou know fully smooth uh because it's\nvery zoomed in so if you were to take\nthis picture and like you know Zoom way\nout on the Lost landscape you know even\nin the smooth one you would start to see\nyou know you know more you know valleys\nand stuff uh you know as you look at\nmultiple different basins but what it\ndoes do is it makes it overall\nsubstantially smoother right so at the\nsame scale we can see that with residual\nconnections things are much smoother and\nI think that is one way of describing\nwhy we use original connections right\nI'm not going to talk about all of the\nindividual different architectural\nfeatures that modern you know neural\nnetworks use but I think one sort of\nGeneral point is that when people build\nyou know machine Learning Systems when\nthey choose their architectures what\nthey're trying to do in picking you know\nwhat that that parameterized function\nlooks like is they're trying to get lost\nLandscapes that look like that one on\nthe right they're trying to get these\nnice lost Landscapes that have good\nproperties\num in terms of being easy to optimize\nover having good Minima that can sort of\nfit all the data\nthe problem is that that task is really\ntricky because we don't actually\nunderstand the sort of mapping between\nhow do I choose a function and you know\na parameterized function and how do I\nwhat sort of lost landscape do I\nactually end up with\num and and that sort of Disconnect can\nbe quite tricky a lot of times\num so yeah we'll talk about that a\nlittle bit Yeah question so uh you said\nthat as the dimensions go the volume of\nthe basins go like r squared r to the\n33rd and so on but what is art are\ngenerally what do we mean by the basins\ngrowing like in as proportion of the\nfootball\nof the whole parameter space\nthe proportion is going off the space\nfrom which you end up in one particular\npoint yesterday yes this is a really\ngood question so I haven't I haven't\ntalked too much about like what I mean\nby Basin yet but essentially we can sort\nof think about it like this gradient\ndescent right likes to roll down these\nHills you know it finds situations where\nyou know we start you know maybe at some\nrandom point so I'm setting up\nparameters it doesn't really describe\nany particular you know meaningful\nbehavior and then grading descent you\nknow looks at this lost landscape and it\ngoes down it tries to find these points\nthat actually have good behavior and a\nbase that is sort of described as the\nvolume where all of the points roll to\nthe same point right they all sort of\nroll into approximately the same area\nand so if you think about you know here\nwe have multiple basins right we've got\nthis like Valley on the left and then we\nhave this like big bass on the right and\nthose are you know if you start on one\nside you end up in different situations\nyou know on this side they're also the\nsame base and this is just looking at\none basin\num and so R is just sort of the\ncharacteristic size the characteristic\nlength of the Basin you know for example\nyou could think about it as the distance\nthat it takes to roll into the basement\num\nit's getting a little bit technical it's\nnot you know super relevant to you know\nthink about all this but basically you\nknow we're thinking about okay we have\nthis you know big we have this\nparameterized function it has this\nreally big space of all these possible\nparameters uh and in that space there\nare these basins that describe various\ndifferent algorithms that the you know\ncould be implemented by that function\nand uh we sort of want to understand you\nknow which of these basins do we\nactually end up with when we do this big\nsearch\num and we'll talk a little bit in a\nlittle bit about you know what are some\nfactors that actually make that\ndetermination\nis there a concrete example of like what\nthese parameters might look like in\norder to help us get an intuition of\nwhat this kind of system actually is\nyeah that's a great question\num I'll talk a little in a little bit\nabout some like actual algorithms that\nthe functions might Implement and what\nthose might look like\num in terms of what the parameters are\nthey're just numbers right you know uh\nthey're just uh you know they're just\nfloating Point values that you know they\nwe use the Matrix multiplications and\nall of you know in not just in you know\nall sorts of various different ways that\nthere may that the function makes use of\nthem I mean you can think about like you\nknow I don't want to go into too much\ndetail on like exactly the structure of\na neural networks because I think it's\nnot super relevant I think the basic you\nknow some you know we'll talk you know\none thing that is relevant is just like\nyou know they do sequential computations\nso uh the you know there's a particular\ncomputation which is performed uh and\nthen you know based on the results of\nthat computation performs another\ncomputation and each in each computation\nis parameterized so you know exactly\nwhat computation it performs is just\ndependent on whatever the values of the\nparameters are that they determine how\nit does it um exactly what those\nparameters are doing and how they work I\nthink is not super important\num\nyeah I mean and and many of them are\nused in different ways you have biases\nyou have you know I mean we're talking\nabout like Transformers you have\nattention heads so there's a lot of\nvarious different parameters that do\ndifferent things but basically it's just\nsome massive parameterized function\num and one of the things also that I\nthink is you know maybe important to\npoint out is that when you have very\nvery very large networks as the sort of\nsize uh increases generally they become\ncapable of implementing you know\nessentially almost any function and uh\nthe the functions that which you can\nImplement increase as you increase the\nsize so with really large um you know\nsets of parameters you can start finding\nalmost any algorithm which is uh you\nknow could theoretically be implemented\num and we'll see later that's really\nimportant the fact that as you increase\nthe size of your network as you increase\nthe number of parameters the number of\npossible algorithms which you can\nImplement increases and you gain the\nability to implement new fundamentally\nnew sort of structures\nokay I'm going to move on I think that\nmaybe this is getting a little bit\nconfusing and hopefully it'll get a\nlittle bit less confusing as we talk\nabout some more stuff so\num\nokay great so here is a concrete example\num of you know a situation where you can\nhave a sort of machine learning system\nand it finds different basins\nso we're going to uh here's what we're\ngoing to do we have these sort of two\nshapes we have the blue blocks on top of\ntriangles and we have the red Pac-Man\nwith a cape\nand the idea is we are going to try to\nsort of train a network that is we're\ngoing to search over possible you know\nparameterized functions until we find a\nsetting of parameters that in fact is\nable to when given the you know Blue\nBlock always you know result in Blue\nBlock and when given the red Pac-Man\nwith a cape it always results in you\nknow packing with a cape\num and then we want to see we want to\nask what happens if we take that uh you\nknow function that we've just learned\nand we see what it does if we give it\nyou know the swapped colors so we what\nif we gave it uh you know red block on\ntop of pyramid or a blue Pac-Man with a\ncape and the interesting thing here\nright is that there's sort of at least\ntwo different algorithms which you could\nlearn which would do a good job on this\nparticular data right there's at least\ntwo basins uh that describe different\nsorts of ways that you could fit this\ndata you could learn a color classifier\nthat classifies any red shapes as you\nknow one thing and any blue shapes\nanother thing or you could learn a shape\nclassifier you could learn the shape\nclassifier that classifies the Pacman\nwith a cape in one bucket and you could\nclassify the blocks on top of pyramid\nanother bucket\nand so we can sort of think about this\nin the Lost landscape setting as\nrepresenting two sort of distinct Basin\nthat describe uh two different possible\nalgorithms that you could find when\nyou're doing this search over uh you\nknow function parameterizations\num and when humans do this when you ask\nhumans to sort of you know what would\nthey sort of classify humans generally\nwill go with the shape though it's a\nlittle bit unclear there's sort of you\nknow a reason for this you know in\npractice you know if I see a red chair\nor a blue chair\num you know they both sort of\nfunctionally serve as chairs and so we\nyou know we tend to classify objects\nbased on their shape but a fact\nmatch on this task uh we'll almost\nalways pick the color classifier I\nalmost always learn to classify the cut\nyou know the red things it's one bucket\nand the blue things into another bucket\nand that's interesting right you know it\ntells us something about what it is that\ndetermines you know which basins are\nselected for you know which\nparameterization uh you know which\nsettings of the parameters are the ones\nwhich actually get uh implemented in\npractice uh you know by these networks\nwhich which sorts of ones do they find\nand in this case at least you know we\nknow that what they find is they find\nthe color classifier\nand and this sort of difference you know\nthe like okay there's multiple different\npossible algorithms which could learn\nthe data but in fact we find one\nparticular algorithm which fits the data\nis in some sense the key to what makes\nyou know machine learning so powerful it\nmakes it work so we think about uh right\nso we can go back to you know thinking\nabout these basins right we can describe\neach of these different algorithms it's\noccupying you know these different\nbasins in the Lost landscape now in some\ncases you know there will be algorithms\nwhich don't even exist in our lost\nlandscape because our model sort of\nisn't big enough to implement them but\nin cases where our model could\ntheoretically Implement either one we\nhave these multiple basins and you know\nthe machine learning algorithm has to do\nsome Basin selection that determines\nwhich of these different algorithms that\ncould fit the data do we actually end up\nfinding\nokay there's some research that shows\nthat you know sometimes there are\nsymmetries between basins\num but most the time we're sort of going\nto be imagining that we're going to be\nlooking at a sort of decimetrized basins\nwhere we have\num you know different basins describing\nfunctionally different algorithms\nyeah\nuh you described as like the algorithm\nchoosing which Basin to use but isn't it\nthe case from what you said before that\nwe start out at some random point and\nthen we don't live with any control over\nwhich Basin we wind up in it's just\ngoing to roll down help yes that's a\nreally good point I think that\num\nin practice it's a little bit unclear\nthe extent to which you always end up in\nthe same Basin or you end up in\ndifferent basins depending on the\ninitialization it really varies based on\nthe particular machine learning setup I\nthink one thing I will point out though\nis that some things are very over\ndetermined so if we think about the\ncolor versus the shape classifier in the\nprevious example\nthe the fact that you learn the color\nclassifier in that case is extremely\nover determined you never learn the\nshape classifier when you start uh\ntraining from scratch on that task\nand we can talk about you know why that\nis but I think that the thing I want to\npoint out that's important there is that\num even if I randomly initialize\nSometimes some basins are so much larger\nand so sort of preferred by the grading\nconsent process that we effectively\nnever find the other basins so in that\ncase um you know and why is this well we\ncan think about you know previously\nright like I was talking about you know\nwhat determines the base and volume\nright well in these really really high\ndimensional spaces Basin volume you know\ncan vary drastically between different\nbasins because\num even small differences in this you\nknow radius of the Basin can have\nmassive changes on on the total volume\nand so because of that\num you know you can have cases where you\nknow some algorithms occupy like 10 to\nthe 20 more volume than other algorithms\nand there's sort of no chance no sort of\nyou know chance uh that you could ever\nfind the smaller Basin uh when that's\nthe ratio that you're dealing with\nbut that's not always the case sometimes\nthere absolutely are cases where there\nare multiple different algorithms that\nyou you know that are sort of both\nplausible and it'll depend on the\ninitialization you know which random\npoint you start with uh what you'll end\nup finding\ndoes that also depend on the data set\nthat we're provided like for example if\nwe provide different colors of the shape\nso if we know a priori that we're\nlooking for a shape classifier not a\ncolor classifier for whatever clyrus and\nthen all the confounders could have been\nadvanced\nyeah that's a great question so uh if\nyou uh give it a bunch of instances of\nyou know cases where you have the same\nshape but a bunch of different colors\nand you tell it to find an algorithm to\nclassifies All Things based on shape and\nignores the color you absolutely can\nlearn shape classifiers\num but the question and so it totally\ndepends on the data set\num in this case though we're sort of\nasking well you know what if we don't\ngive it that information what if we sort\nof don't say whether we're asking for\ncolor we're asking for shape we sort of\njust want it to figure out what is you\nknow what is the best algorithm for\ndistinguishing these two things and then\nwe sort of ask you know what what it\nlearns in that case\num and you know I'll talk about a little\nbit but I think that that distinguishing\nyou know the ability of the machine\nlearning algorithm to pick you know\nwhich algorithm does it like better to\nsolve the the problem is really critical\nto why these things are able to do what\nthey're able to do because\num you know you can imagine if they just\nmemorize the data exactly at every point\nyou know that would do a good job at\nfitting the data you know any data you\ngive it if it just memorizes every data\npoint it'll always be able to do you\nknow 100 perfect performance but but\nit's useless right you know a 100\nmemorizer that has no ability to do\nanything coherent on any new data points\num doesn't do anything you know\nstructurally useful for you\num and so the fact that machine learning\ndoesn't do that that we don't just learn\nthis you know memorizer that we learn\nsomething that has an interesting\nstructure that actually implementing\nsomething you know relevant like you\nknow distinguished based on color\num is what makes it powerful and useful\nis it possible for\nto to implement the same algorithm that\nwith different weights such that I mean\nif we are in such case we have different\nrates but basically the same algorithm\ndoes it mean that we have learned into\nthe same vessels or into different\nvessels bits which are equivalent in\nterms of plus yeah that's a really\nreally good question so in fact there's\na there's some research that shows that\num that fat the fact that there can be\ncases where the same set of Weights will\nimplement or start different sets of\nWeights will implement the same\nalgorithm\num is a really important facet of which\nbasins sort of end up being larger and\nwhich base instead of being smaller\nwhich algorithms sort of end up being\nfavored because if you have an algorithm\nand the same basic algorithm can be like\nimplemented in a really really large\nnumber of different ways then that\nalgorithm becomes much easier to find\nand so it becomes favored by the you\nknow the gradient descent process uh you\nknow it becomes you know that sort of\nbase and becomes very large because\nthere's all of these different sort of\neffectively equivalent parameterizations\nwhich result in the same uh you know\nfunctionally equivalent sort of model\nand so we think about something like the\ncolor classifier one of the ways we can\nunderstand you know why does it learn\ncolor is well color is you know an\nalgorithm that is sort of relatively\neasy for it to calculate based on the\nRGB values that it's sort of taking in\num and there's a bunch of you know other\nstuff that it you know doesn't it\ndoesn't have to it doesn't really matter\nyou know it only takes a small number of\nparameters the rest of the parameters\ncan be set to essentially anything\nthere's a bunch of ways to you know\ncalculate the color looking at different\npixels looking at different places on\nthe image and so it's not sort of it's\nIt's there's so many different ways to\nimplement it and it's so simple based on\nthe data that it sort of input that it's\nreceiving\num that it sort of ends up being you\nknow a substantially larger base in the\nsort of favored by default\num yeah\nso one thing I read last year was this\nidea that the algorithms will most\nlikely to be selected with the most\ncompressible because they could be done\nin the fewest parameters letting the\nother parameters be basically whatever\nthey wanted is that related to this idea\nof pace and volume yeah absolutely so uh\nI think that we're going to talk a bunch\nabout why Simplicity is you know a\nreally important component of you know\nwhat what's the lacks which algorithms\nyou end up learning and simplicity you\nknow functionally is essentially the\nsame as compression it's just you know\nhow you know can I can I take it can I\nyou know find some really functionally\nsimple algorithm\num that is able to explain all of my\ndata that's sort of what a compression\nis\num\nand so compression is absolutely you\nknow important part of what's Happening\nHere Right\num now how it maps from you know symbol\nalgorithms to large basins and things\nwhich are favored by greeting descent is\na little bit complex there's a bunch of\ndifferent things that go into that so\none really important facet is what I was\njust talking about which is the fact\nthat when you have\num you know a sort of structurally\nsimple algorithm it leaves a lot of\nparameters un you know untouched or like\nnot relevant or there's like a bunch of\ndifferent ways to implement it which\nmeans there's a lot of different points\nin the weight space which all correspond\nto effectively the same algorithm and\nthat and that you know means it has a\nvery large Basin um so that's one of the\nfactors but there's other factors as\nwell one other factor is that we often\nwill do explicit regularization where we\nwill actually like take the function and\nwe will explicitly say you know we want\nfunctions which do better on the metric\nof being small and simple\num and the reason we do that is the same\nreason we did the like residual\nconnections right in the previous thing\nwhere we just it's we really you know\nthe reason that machine learning is so\npowerful the reason we want to do it is\nbecause we're hoping to get these simple\nstruct you know structurally simple\nalgorithms out of it and the best way to\nget those structurally simple algorithms\nis by you know using these techniques\nthat help us find\num you know structures that\num in fact result in these you know\nsimple algorithms and so that's why we\ndo suffer great realization that's why\nwe add things like residual connections\nto get these um you know lost Landscapes\nthat have you know the sort of simple\nalgorithms favored\nI think I'm wondering about now is when\nwe got these bases is there a way to\ntell which algorithm they will implement\nmy current model of this looks like oh\nwe've got these points on like these\nbasins in parameter slash lossland the\nin the last landscape and then those all\nkind of show which parameterizations\nperform well on our different tasks so\nfor the classifier uh the the log basins\num will basically result in a classifier\ndoes dot has a high accuracy\num but the way the way it does that is\nstill kind of obscure to us right does\nit do it by Form classification or like\nwell through the the colors and now my\nquestion is is there a way to actually\ntell just by looking at the Basin what\nkind of algorithm it is implementing\nthat's a really really good question and\nI think the unfortunate answer is\noftentimes no\num you know if only thing I know is okay\nit's some setting of parameters and that\nsetting of parameters looks like it does\na good job on the data that's often all\nI know you know I know that in some\nsense you know okay I know that this is\nthis what every algorithm I found\ncorresponds to a large base and it\ncorresponds to some sort of structurally\nsimple algorithm but I don't often know\nwhat algorithmic corresponds to now\nsometimes you can figure it out so in\nthe shape you know color example it's\npretty easy to figure out because we\njust give it a new example you know we\ngive it a case where the shape and the\ncolor are different and we see what it\ndoes right and so we can tell you know\nin that case you know what it's doing\nand sometimes you can do that you know\nto tease about different uh differences\nand algorithms but sometimes that gets\nreally hard you know there's just a lot\nof various different things to test on\nand it can be very difficult you know\nform hypothesis about what it's really\ndoing so we'll see in a little bit you\nknow an example of a case where we can\ndo transparency we can actually sort of\nlook inside the model and you know see\nwhat algorithm is influencing in a\nparticular case but that can also be\nreally hard because oftentimes you know\ninterpreting what these parameters are\nactually doing is really difficult\nbecause you know they're not selected to\nbe interpretable they're just selected\nto be whatever you know is the low point\non the Basin you know whatever set of\nparameters and fact results in good\nperformance and so you know there's no\nreason that they we would be able to\nunderstand what they're doing in some\ncases we can though in some cases we can\nsee ah it's implementing this sort of\nsimple you know algorithm that we\nunderstand let's see an example of that\nin a little bit but you know that's not\nalways the case sometimes we can\nunderstand and sometimes it's really\ndifficult you know hopefully if you know\nthe fields of transparency progresses we\ncan get to the point where we can sort\nof always look at a you know set of\nparameters and understand what it's\ndoing but in in general right now there\nisn't really a way for us to do that\nokay so I I talked about this already\nbut the thing that is so important about\nyou know this Basin selection about this\nidea of you know figuring out which\nsetting of parameters you want is you\nknow structurally this is the thing that\nmakes machine learning uh you know so\ngood what makes it what it is because if\nI imagined an alternative process right\nthat you know just took you know uh you\nknow I saw these red data points right\nand I was like okay here's how I'm going\nto fit the Red Data lines right I do\nthis you know crazy blue you know line\nright that goes up and down\num you know we know that that line is\nwrong right wherever this data came from\nwherever you know these I collected\nthese red data points from it probably\nwasn't from that distribution it\nprobably didn't come from a line looked\nlike that blue line\num and we know that because you know\nsomething like Occam's razor right you\nknow in fact real world data you know\nreal world patterns that actually exist\num tend to have these simple\nexplanations they tend to have you know\nuh generating uh procedures behind them\nthat they have some structurally simple\npattern that describes what's going on\nand so the magic of machine learning the\nthing that's so powerful about it is\nthat we don't just find any function\nthat fits the data right we have these\nprocedures for finding simple functions\nthat fit the data functions that are\nsort of structurally simple explanations\nfor what's going on you know this green\nline here\num that are actually likely to do a\nreasonably good job if we give it a new\ndata point\num and so you know what's simple means\nis a little bit weird right so it's not\nthe same as always what symbol means to\na human so if we think about the you\nknow color versus shape classifier you\nknow humans will often pick the shape\nbut the the you know the model will\nalmost always pick color\num and so you know it's not quite the\nsame but it is you know something that\nis you know very important here because\nit is we have selected our machine\nlearning models we found the\narchitectures that in fact result in\nfinding things that are simple in the\nsense that they do a good job simple in\nthe sense that they actually fit real\nworld data they actually describe real\nworld patterns\nokay\nokay so I promised an example of a sort\nof you know transparency of a situation\nwhere we can actually look at what these\nsorts of simple algorithms you know in\nfact look like in practice when they're\nimplemented in you know a neural network\num so this is an example of a case where\nwe took you know a very large\nparameterized function in this case a\nconvolutional neural network and it was\ntrained on uh imagenet which is a class\nuh a problem we were trying to classify\na bunch of different uh images so you've\ngot cars and cats and dogs and you have\nto be able to distinguish between each\nindividual one and tell uh you know\nwhich is which is which and so we've\ntrained a very large Network to do that\ntask and we want to know what's it doing\nright\num in this particular case is you know\nwe're trying to understand how it\nclassifies carbs so it has one\nparticular point in the computation\num where we can sort of ask okay what\nimage would most you know cause this\nparticular part of the computation to be\nlarge you know to really uh you know uh\nactivate\num and we find this sort of you know\nimage that sort of looks like a car and\nuh you know the conclusion is okay this\nis sort of where it's doing the\ncomputation that determines is the thing\na car or not and so you can sort of\nthink about this image as the maximally\ncar image this is you know the image\nthat this neural this neural network\nthinks is you know the most possible car\nand um an interesting thing from this is\nthat uh you can sort of see I think if\nyou if you sort of squinted this uh what\nit's doing right like what what actual\nalgorithm is it implementing uh for car\ndetection\num if you have not seen this before you\ncan try to think a little bit and guess\nas to what it might be\num so I'll give you I'll give a couple\nseconds to just sort of think about that\num but I will reveal the answer because\nwhat we can do is we can actually look\nat the inputs so we can see okay because\nthe algorithm you know operates uh you\nknow the particular parameters function\nthat we found it operates in these\nsequential computations it first does\none computation and then it does another\ncomputation we can see you know okay\nwhat are the computations that happen\nbefore this and what are the sort of\nimages that maximally activated those\ncomputations\num and we can use that to help us\nunderstand what this what this\ncomputation is doing\nand so if we look uh we can see and I\nthink it's pretty clear based on this\nwhat it's doing so the images on the top\nare sort of what it's looking for on the\ntop of the image the images on the\nbottom or what it's looking for on the\nbottom of the image\num and it's pretty straightforward it's\nlooking for Windows on the top and\nwheels on the bottom and that's what a\ncar is to this neural network it's\ndetermined the algorithm it's found this\nsort of structurally simple algorithm\nfor finding cars which is look for\nWindows and then look for Wheels yeah\nthose blank things in the center are\nthose just not shown or does that mean\nthe algorithm doesn't care what's in the\ncenter\nit means it mostly doesn't care it's\nsaying that uh whatever is in the center\nof the image it's not really looking at\nit's not using that to determine uh you\nknow whether it activates or not\nand so uh yes you can see in this case\nit's really looking you know very\nconcretely for two very specific things\nit's you know we have in the previous\ncomputation we you know we did some\ncomputation to determine what a wheel is\nand it did some computation determine\nwhat a window is and looking to see you\nknow is there a window on the top and is\nthere a wheel on the bottom\nokay so this is pretty cool right it's\nshowing us you know this concrete\nexample of you know something you know a\nlittle bit more complex than just the\nyou know um you know the color the shape\nexample where uh our neural network you\nknow we found a particular\nparameterization by searching this big\nparameter space to find some setting of\nparameters that is you know uh you know\nhas some large Basin and does well on\nthe task and we found this structurally\nsimple algorithm right we didn't just\nfind some you know memorize every\npicture of a car you've ever seen we\nfound some algorithm that actually you\nknow he's doing some structurally\ninteresting thing\num that is able to generalize right this\nis an algorithm that if I give you a\nrandom new picture of a car it's often\ngoing to succeed you know sometimes it\nhas you know cases where it might fail\nyou know maybe I've removed all the\nwheels from a car and while it's still a\ncar\num and in that case you know maybe this\nwould this would fail but in general you\nknow this is a this is a structurally\nyou know uh simple algorithm that is\nactually performing the task in\ngenerality\nand that's what we're trying to get\nright that's the thing that machine\nlearning is doing it's trying to find\nthese structurally simple algorithms um\nthat are able to solve the task in\ngenerality so that you know you can find\nalgorithms that actually generally do a\ngood job\nokay\nso um\nhe I want to talk about another thing\nthat's a little bit interesting about a\nparticular fact about this you know\nnotion of simplicity because I think\nit's sort of counterintuitive\nwhich is that this sort of notion of\nSimplicity right of finding these sort\nof structurally simple algorithms to\nsolve tasks is something that actually\nuh when we when we build larger networks\nwhen we build bigger networks with more\nparameters and more data\num well when we build larger Networks\num we often are doing it for the purpose\nof finding simpler algorithms and this\nworks right when we have\num larger networks we actually find that\nthey do often learn structurally simpler\nalgorithms so this is a little bit\nconfusing right so you know okay they\nhave more parameters right so in some\nsense you know a large network with more\nparameters it you know it requires more\nthings to describe and so it must be\nmore complex right\num and in some sense that's true right\nit is the case that if it has more\nparameters it takes more things to\ndescribe it's sort of more complex but\nit might be learning some algorithm\nwhich is structurally more simple so you\ncan think about let's say I had a\nnetwork that all it could do was you\nknow one sequential computation right it\ncan't learn if it can only do one\nsequential computation it can't learn\nsomething like look from Windows on top\nof Wheels because that's an algorithm\nthat really requires you to be able to\nFirst do the wheel and the window\ncomputation and then combine those into\nthe you know windows on top of Wheels\ncomputation and if you're only doing\nsomething very you know short and simple\nyou can't learn that algorithm or you're\ngoing to have to learn some sort of uh\nyou know you know maybe more Brute Force\nyou know memorize these sort of thing\nand that sort of brute forcing\nmemorizing algorithm isn't structurally\nsimple in the way that the windows on\ntop of Wheels algorithm is and so in\neven though the windows on top of Wheels\nalgorithm maybe takes more parameters to\ndescribe the actual sort of fundamental\nalgorithm that it's implementing is\nsimpler it's doing something that is\nsort of structurally uh you know simple\nand we can think about this as a sort of\nOccam's razor sense right so Occam's\nrazor says that in practice real world\ndata is described uh you know well by\nthe simplest algorithm that fits the\ndata and that's sort of what we want\nright if we believe Occam's razor and we\nhave some data and we want to do you\nknow find the real world pattern behind\nthat data then we want the simplest\nalgorithm that fits that data and when\nwe have larger networks you know we we\nsort of increase our search space now we\ncan search over even more possible\nalgorithms\num and that unlocks our ability to find\nalgorithms that are sort of structurally\nsimpler than any of the algorithms that\nwe previously were able to search over\nand so we can think about it as well\nwe're still trying to find the simplest\nalgorithm but now we're trying to find\nthe simplest algorithm in a larger space\nand that means we can do a better job\nbecause the simplest algorithm in a\nlarger space can be simpler than the\nsimplest algorithm a smaller space so I\nhave an example here of what that looks\nlike in practice but here question\nso does this mean that in general as\nmodels get larger they actually become\neasier for us to figure out what they're\ndoing and what's going on\nyeah that's a great question and the\nanswer is maybe I think it's really\nunclear so\num uh Chris Chris Ola who's the head of\ninterpretability at anthropic has a sort\nof hypothesis about this which is you\nknow what happens is first as you have\nuh you know networks like uh you know\nsomething super simple parameterized\nfunctions like linear regression right\nwhere it's just two parameters it's\nreally easy to understand because it's\nso you know the uh function itself is\nsort of structurally simple\num but then as we introduce more\nparameters it starts to get less and\nless uh understandable\num but the idea is like like like you're\nsaying as we keep adding parameters\nit'll start to get more understandable\nagain because\num you know we're starting to learn\nthese sort of simpler you know human\nunderstandable algorithms one concern\nthough and this is sort of what Chris\nhypothesizes will happen after that\npoint is that at some point you start to\nlearn algorithms which are sort of\nsimple in a human sense there are\nalgorithms which are structurally simple\nand their algorithms are structurally\nsimple in a way that we understand them\nthe sort of simple algorithms that we\nlearn that we use right when we try to\nunderstand things in the world but at\nsome point you start to surpass the you\nknow the best algorithms the human use\nuh you know the best algorithms that we\nuse that we understand and you start to\nget to algorithms that are structurally\nsimple in some meaningful way but in a\nway that is not the sort of algorithms\nthat humans often use to sort of\nclassify and understand the world and at\nthat point it's sort of unclear you know\nmaybe at that point it starts to become\nless interpretable again\num that's all conjecture though you know\nwe don't really know so\num what we have seen is that you know I\nthink\nthe sort of broad Strokes of what I just\nyou know sketched out about the the sort\nof starting part of that curve seems to\nbe broadly correct where a lot of really\nreally early algorithms are very easy to\nunderstand and it gets harder and then\nit gets easier again\nI think one tricky thing is that there's\nbeen a lot of that sort of work was done\noriginally on image networks like with\nthe you know windows on top of wheels\nand it gets trickier for other sort of\nmodels like language models which we'll\ntalk about uh later\num\nbut um but you know I think I think it's\nunclear and you know it's you know the\nbest answer I can give is just you know\nmore research is needed you know we need\nto you know if if we do more\ninterability and we start understanding\nmore about you know when is it the case\nthese systems are understandable we can\nlearn more about this and right now you\nknow the best I can offer is like some\nspeculation anecdotal evidence you know\nwhat sorts of things we've seen and what\ngeneral Trends we don't really know what\nwe're gonna what we're gonna find\nquestion oh you've got it already great\num so I'm going to interpret the graph\non top\num it looks like so it looks like at the\nbeginning perhaps the model is\nunderfitting and then there's this point\naround like 100 or 110.\num where the loss begins to rise again\nand then it at a certain point begins to\ndrop off and I guess I'm wondering is\nthe way to interpret this that there's\nlike two competing forces\num like the one where you described\nwhere a larger\nmodel is able to disappose\num search over more algorithm space\num and so you have like overfitting\nhappening at first and then this search\nfor this ability to search for more\nspace takes over at a certain point yeah\nnewer fitting stops is that so so I had\nnow for a while and I haven't talked\nabout it so that's my\nback so let me let me try to give a\nlittle bit of how I how I think about\nwhat's going on here so\num yeah so first just very briefly so\nwhat's on the top once at the bottom so\non the bottom we have training loss so\nwhat this describes is it describes how\nwell does the particular setting of\nparameters that we found do on the data\nthat has seen so far so on the data\nwe've actually given to it on the data\nthat we actually you know have\num available uh that we're sort of\ntraining it on how well is it doing\nand we can see that as we sort of change\nthe size of our of our model in a this\nis a particular type of size we were\nholding everything else fixed we can see\nthat as we increase the size it gets\nbetter at fitting the data right the\ndata that we've actually given it uh you\nknow given to it is able to fit you know\nwe're able to find some setting of\nparameters which does a good job on that\ndata\nokay but now we have to ask the question\nright how well does it do on data it's\nnever seen before and in this particular\ncase this is a translation model so\nwe're trying to get it to do a\ntranslation task\nwe want to understand you know if I give\nit a you know pair of sentences in you\nknow English and French that it's never\nseen before how well does it do on those\nsentences suppose the ones it has seen\nand the answer like you were pointing\nout is well it's a little bit weird so\nyou know it starts out doing very poorly\nbecause higher loss is you know worse\nperformance uh because you know it\nstarts out at some random algorithm and\nuh you know when it when it just has\nsome really small embedding Dimension\nright when it's just a you know a tiny\nmodel it basically can't Implement\nanything useful or interesting right and\nso it's doing the best it can but the\nbest it can is not something very very\nyou know meaningful but as we increase\nthe size of the model\num it gets drastically better and so we\nfind you know an algorithm that is sort\nof able to sort of fit and understand\nwhat's happening here and then it gets\nworse again so we start you know getting\ninto a regime where the performance you\nknow degrades and then as we keep\ngetting larger it starts improving it\nso what's happening here so a couple of\nthings so first let's start with what's\nhappening in this First Basin or that\nthis first sort of uh you know this this\npart here what's happening in this part\nis structurally where uh we started with\na you know we have a situation where\nit's not possible uh you know to\nactually find any setting of parameters\nbecause our thing is so small that fits\nthe data perfectly\num and so because we can't find anything\nthat fits that data perfectly it's sort\nof first it's sort of forced to find\nsomething which is simple uh via the you\nknow just the brute facts that it can't\nimplement the memorization right even if\nit wanted to memorize the data exactly\neven if it wanted to just you know\nmemorize every you know sentence it's\nseen it can't do that because then it's\nnot big enough right it doesn't have\nenough uh you know parameters to\nactually memorize the data and so it's\nforced to implement this sort of simple\nuh you know some sort of symbol\nalgorithm instead okay but then we give\nit more parameters right and what it\ndoes is once we give it a just enough\nparameters that it's able to fit the\ndata exactly where the training loss\ngoes to zero you can see that that is\nthe point where the green green\ncorresponds to green and purple\ncorresponds to purple that's exactly the\npoint where we get the worst performance\non the the test data set that sort of\nheld out data that's never seen before\nso what's happening well what's\nhappening is when we get to this point\nwhere it's just barely large enough that\nit can find something that is able to\nfit all the data well in that situation\nthere's basically only one thing that it\ncan learn it can learn to you know it\ndoesn't sort of have to have a choice\nfrom a bunch of different possible\nalgorithms where it can pick the\nsimplest one that fits the data there's\njust one algorithm it's just the you\nknow essentially a memorization\nalgorithm right it's just just barely\nlarge enough that it can memorize all\nthe data points and it can't do anything\nelse it's not big enough to be able to\nlearn some you know structurally simple\nalgorithm that is able to sort of uh you\nknow solve the data in a meaningful way\nand so because of that we just end up\nwith this poor performance but if we\nkeep making it larger we get better\nagain because if we keep making it\nlarger now there's a lot\nof all sort of have a good performance\nand it can pick from among them you know\nwhichever one actually is simplest\nwhichever one actually you know has the\nlargest Basin whichever one actually is\nsort of effectively doing you know\nsomething which is structurally simple\nin a way that is able to generalize to\nthese other you know new settings that\nit's never seen\nso I I will point out that you know a\nlot of this is is not super well\nunderstood\num you know a lot of the stuff I just\nsaid is is some amount of disconjecture\nI think that a lot of it though is\npretty well supported by the literature\non this question uh this is this\nspecific phenomenon is called double\ndescent\num and another thing I will point out\nalso is that in practice you don't you\ndon't always want to be in this regime\nso sometimes you'll want to be in this\nBasin you know in this case here where\nyou're early on and you're simple\nbecause you're just sort of too small\nand sometimes you want to be in the\nother case where you're simple because\nyou're too big\num but in either case you're sort of you\nknow selecting it to be simple now one\nthing I will say is even when you're in\nthis sort of early Basin where you're\nsort of simple because you're too small\num as your the amount of data that you\nhave increases the this Basin sort of\nmoves to the right and so you have to\nmake your model larger and larger and\nlarger and larger to stay in this space\nand also as you get more and more data\nand so\num it's still the case that we sort of\nas we get more and more data we sort of\nwant larger and larger models you know\nmore and more parameters\num even even if we're in this case and\ncertainly if we're in that case\nquestions\nuh so earlier you said that machine\nlearning models seemed biased against\nfinding this memorization based pattern\nso why does that seem to happen here and\nnot in other circumstances like the car\nexample\nyeah that's a great question so what's\nhappening here is we're sort of forcing\nit here we've sort of put it in a\nsituation where it's just barely big\nenough that it can fit the data if it\nlike devotes every parameter to\nmemorizing it exactly but it's not large\nenough it doesn't have enough parameters\nto to sort of represent anything you\nknow sort of more interesting than that\nand so because of that you know one of\nthe restrictions right is you can think\nabout the machine learning as well it\ntries to find this you know this simple\nBasin but it's always going to find you\nknow some Basin which actually results\nin good performance right if there's\nnothing you know if if all of these sort\nof possible simple things that it you\nknow that it can find that it has\navailable to it none of them actually do\na good job on the data well then it's\nnot going to find them right it's only\ngoing to find things that actually in\nfact do a good job and so we put into\nthis situation where it sort of he\ndoesn't want to find these sorts of\num you know things which are\num you know memorizing but uh We've sort\nof forced it to because it's sort of the\nonly option in the space available to it\nthat's sort of the theory for for what's\ngoing on there\nI'm still not quite so sure what happens\nwhen we extrapolate this even further so\nwe are cutting this graph off at a\nhidden dimension of about 500. but\ntoday's Transformers are much larger\nrights\num\nis there something on like a paper that\nlooks into what happens when we actually\ndrive it in higher and\nI had regard also on observation here\nsteps the test loss\ndoesn't seem to dip below its initial\nlike below point is that something that\nyou can make do with by increasing the\nheater size yeah if you if you keep\nmaking it larger uh at least along like\nit's a little bit unclear if you're just\nlooking at this one particular Dimension\nbut if you're just like making the model\noverall larger yeah you will eventually\nreach substantially better performance\nthan any other point by extending this\ngraph uh you know out in that direction\num I will say so in practice yes we do\ndo things that are much larger than this\nwe also do a lot more data than these\nwere trained on right so that that\nchanges the overall structure of the\ngraph\num\nbut yeah uh you know it is absolutely\nthe case that and and this is pretty\nwell studied um that you can sort of\nreplicate this um in many different\ncontexts\num though exactly where you'll find each\nindividual point will vary based on you\nknow the amount of data they're using\nand stuff like that um but yeah um you\nknow if you keep going you will start to\nget even better and better performance\num most of the time because\num you know the sort of compute\nefficient Point often will end up being\nrelatively early in this graph that's\nbecause you have huge amounts of data\nthat sort of push the whole thing out\nand then we sort of find you know\nsomething relatively early on that is\nable to you know fit the data\nyou know both of these are sort of valid\nstrategies for sort of finding simple\nalgorithms either being really early or\nbeing really late you just don't want to\nbe in the middle\num in both cases you know what you're\ndoing is you're sort of trying to find\nsomething which is you know structurally\nsimple and able to fit the data and the\nquestion is just sort of you know which\nis the best place to be on this graph to\nfind something which is you know\nstructurally simple now you know the the\nimportant point though you know the\ntakeaway sort of that I want here is\njust that\num when the reason we build these larger\nand larger models you know the the value\nof them is that they are generally able\nto help us find these more structurally\nsimple algorithms in both cases you know\neither whether we're way over there or\nwhether we're over here in both cases\nthe larger models help you find simpler\nthings because as I have a larger and\nlarger model I can use you know more and\nmore data and you know find some simple\nthing here or I could you know push it\nway over there and find some you know\nsimple thing over there in either case\nwhat we're doing is we're sort of by\nhaving a really a much larger space of\npossible algorithms to search from we\ncan find you know ones that fit an\nadd-on yet are\num you know are still sort of\nstructurally simple and meaningful way\nquestion\nthere's a little counter-intuitive to me\num in some ways because in like\ntraditional machine learning\num larger models we sometimes think of\nas finding a more complex solution was\nhere you're saying that larger models\nare able to find on the simpler solution\nand I'm wondering\num if there's something fundamentally\ndifferent about neural Nets or\nTransformers or am I making the false\ncomparison between parameters and the\nsense of the neural net and complexity\nor large in the sense of a simpler\nalgorithm yes you're definitely so\ntotally valid and and this sort of\nclassical machine learning you know take\nwould just give you this part of the\ngraph this early one and would not give\nyou the later part and so the later part\nis a little bit you know confusing and\nit's sort of unique to machine learning\num but what's Happening Here is\nfundamentally not that complex so\num what's happening is\nthat um\nwhat we're doing is we're we sort of are\ndoing what's you know with the sort of\nauthor's original you know double\ndescent paper we'll call this is\ninterpolation where the idea is that you\nknow in in sort of the standard you know\nmachine learning sort of in the standard\nlike uh you know classical Paradigm the\nidea is well we just have some you know\nspace and we're just gonna we we're\ngonna you know take the space and we're\njust going to select those algorithms\nwhich in fact do a good job on some data\nand because of the finiteness of our\nspace that gives us some guarantees\nabout how good those algorithms have to\nbe on new data that's sort of not what\nmachine learning is doing right the\nspace is often so big that it's not the\ncase that we're sort of getting our\nguarantees from the finiteness of the\nspace the reason that it does well is\nbecause of this sort of you know basic\ndifference between you know some\nalgorithms being favored to be\nimplemented and some not right so some\nbasins being larger than others some\nthings being you know you know have many\npossible parameterizations to implement\nthem\nand so what you can think about this is\nhappening right is that\num what's really going on is that rather\nthan sort of you know just doing this\nsort of finite you know space thing it\nhas you know a basic you know prior that\ndescribes you know for each individual\ntype of algorithm you know how simple is\nthat algorithm how reasonable is that\nhow structurally useful you know is it\nand what machine learning does but you\nknow what we've able to we've been able\nto find these you know uh architectures\nand these search processes which result\nin being able to select out the you know\nalgorithm which is sort of does a good\njob on the data while also having those\nsort of properties\num that are sort of being selected for\nby the structure right you know that are\nin fact simple algorithms\num and because of that that's where we\nget our guarantees right the the reason\nthat it generalizes is not because we\nsort of forced it to by selecting from\nsome small space you know the the\nalgorithm that actually did had good\nperformance it generalizes because the\nreal world actually has the property\nthat it is simple right and so you can\nthink about this as you know the\nguarantees of machine learning you know\nthe ability to sort of learn algorithms\nwhich actually work in practice in the\nreal world what didn't work if you were\nin a world where real world data wasn't\nsimple right if you were in a world\nwhere like real world data was just like\ntotally randomly selected it would not\nbe the case that any you would see any\nof this right you wouldn't be able to\nyou know machine learning wouldn't work\nin the same way you wouldn't be able to\nfind you know this sort of property\nwhere you can just sort of you know take\nthese sort of things and find the big\nbasement and that actually does a good\njob\nand so you know\num because of this it's not coming from\nthe sort of basic mathematical property\nabout like you know uh you know the sort\nof these finite space properties it's\ninstead coming from the sort of property\nof the world that like you know real\nworld data actually is simple and we're\nable to find you know things that are\nbiased towards simple algorithms and\nthat's why it does a good job so in that\nsense it's sort of it's sort of in a\ndifferent regime\num you know it called this interpolation\nright where it's like we're not trying\nwe're trying to sort of take data points\nand find the sort of simple thing that\ngoes between those data points right\nrather than\num you know trying to just sort of like\nyou know uh fit fit in some sense um\nyeah those are class scores\nyeah\nI believe last year there was a paper\ncalled the chinchilla which showed that\nmodern language models were\nunbetrained in terms of the data they\nuse and that should actually be smaller\nbut they're trying a lot more data is\nthis basically showing that they could\ngo further before they reach that first\noptimal point on this graph\nyes that's a good point so I I won't go\ninto\ntoo much detail on the structure of the\nscaling laws\num\nbut what chinchilla was showing was was\nessentially that you know if you do your\ntraining right because there was some\nsort of problems with you know stuff\nlike the learning rate schedule and the\noriginal scaling loss paper\num you can make it so that the sort of\nwhere you where you find these sort of\noptimum what the slopes are ends up\nbeing different such that what they're\nsort of trying to calculate is the sort\nof compute optimal point so how many\ndata points how large of a model should\nyou pick to find the best performance\ngiven that you have a budget in terms of\nyou know how many you know GPU Cycles\nare you willing to spend\nand that ends up being a little bit\ntricky it's often very dependent on like\nyou know how we do sort of computation\num but one of the things that chinchilla\nshows that is that they're sort of\nthrough these relatively natural scaling\nlaws where you know the ratio of the\namount to which you should increase data\nand the amount to which you should\nincrease the size of the model\num actually when you do it right ends up\nbeing one to one you're sort of uh you\nyou sort of increase them uh you know\ntogether uh in the sort of fixed ratio\nand so\num this is an interesting fact that sort\nof chinchilla has found\num but it's not super relevant here it's\njust like okay if you were sort of to\nthink about how do you what happens when\nyou sort of scale these graphs up and\nyou look at the trends and sort of where\nthings you know where things and end up\nyou know what sort of Trends do you\ngenerally see\num but the trends hide a lot of things\nthey hide a lot of you know okay what's\nactually happening in practice in terms\nof what algorithms we're implementing\nyou know one thing that's kind of fun is\nthat if you look at this at this graph\nyou can see there's this little bump\nright here and at the time when this\ngraph was made I don't think anybody\nreally understood what this pump was but\nI think I can say with relative\nconfidence that we Now understand\nexactly what's happening here\num uh it's learning a particular type of\nalgorithm like the windows and top of\nWheels algorithm called an induction hat\nso there's a paper that finds actually\nwhen you do language training you almost\nalways see a bump around here uh that\ndescribes the formation of a particular\ntype of algorithm so it's pretty\ninteresting\num you know what's what's sort of\nhappening when you sort of you know look\nat the individual details when you zoom\nout you know you can see these sort of\nnice scaling laws that describe these\nsort of General properties about you\nknow uh you know as we change the data\nand the model size how are we changing\nour ability to find simple algorithms\nthat fit the data in general and those\nsort of properties tend to be relatively\nuh continuous and predictable but the\nactual algorithms that it ends up\nimplanting are oftentimes you know can\nvary a lot\ncool okay another uh nice example of\nwhat's happening in this sort of\nparticular case of you know when you\nhave really large models is rocking so\nthis is a case where uh take a\narithmetic task so uh you just sort of\ntrained on on a particular uh you know\nmath mathematical task and\num we have the sort of training loss uh\nuh in in red or this uh so or I guess\nthis is not this is accuracy not lost so\nso higher on this graph is better where\num originally you know after after some\nnumber of steps it learns exactly you\nknow to fit the data perfectly\nbut on the held out data it's learned\nnothing it has no ability to find any\nalgorithm which is Meaningful to solve\nthis larger data because it's\neffectively just memorized something but\nif you keep training for long enough\neventually what you find is that it does\nlearn it the it does find the\nstructurally simple algorithm which is\nable to solve the task\nand so you know what's happening here is\nthis is another case where it's you know\nas we train for longer as we have larger\nmodels we're able to find these sort of\nstructurally simple algorithms and once\nwe've found them they're sort of these\nreally powerful you know basins these\nattractors that uh you know we can stay\nin and can really help us solve problems\nbut they take these you know oftentimes\nreally big models and a lot of search to\nactually find things which successfully\nare able to learn these sort of\nstructurally simple algorithms and so in\nthis case it you know it takes a really\nlong time and a lot of optimization\nbefore it finally clicks and learns the\nactual sort of structural\num you know arithmetic algorithm which\nsolves the task\nand so that's what's happening\neventually here at the end\num but you know it can it can sometimes\nyou know take a lot of search around the\nspace until it finally finds this you\nknow this sort of basin which actually\nhas this sort of structurally simple\nthing Yeah question\nlooking at the nose ER grounds but\nforeign\naccuracy here\nyeah\nif we're looking at the loss instead of\nthe accuracy would look look very\nsimilar just uh you know inverted yeah\nvery similar\nwe're trying to optimize for training\nloss in terms of our gradient descent\nhow is it that the AI doesn't just get\nlike stuck at a point where it never\nlearns anything how is it an eventually\nactually does learn to generalize\ndespite not really having any more gains\nto be made by doing so\nyes that's a great\njust optimizing right for performance\nright it is in some sense sort of\nlooking for things which are simple\nright and so the question is like well\nhow is it looking for things that are\nsimple and it's a little bit complex\nthere's multiple different factors so\none factor is just the fact that we do\ndo regularization right so we actually\nhave regularization is just the process\nof we force it to learn simple things\nright we sort of force it to find\nalgorithms which are you know have small\nvalues for the weights so that they're\nyou know smooth and continuous and so\nwhat that means is it just has to keep\nsearching a lot until it finds something\nwhich actually is you know able to have\nsmall you know simple weights and still\nyou know simple you know parameters uh\nweights are just the parameters in the\nparameterized function which in fact\nresult in good performance and so so\nregularization is an important component\nof this\num but also just you know the stuff like\nBasin volume right where it's like okay\nas it keeps searching around you know\nsimple structurally simpler algorithms\nhave this property that they're sort of\nattractors in the space right that you\nknow once they tend to octave by larger\nvolumes they're sort of once you find\nthem you sort of tend to stay there\nand so you know as we keep searching we\ncan find these sort of you know\nstructurally simple things\num you know because of those sort of\nbasic properties of of the space Yeah\nquestion\nis a structure the simple algorithms\nhave huge basins can you help me\nunderstand why is it true\nyeah yeah so great question so it's a\nlittle bit unclear I think that you know\nuh everything I'll say was just sort of\nyou know some speculation but I think\nthere's a couple of things that we can\npoint out so one thing um that I was\ntalking about a little bit previously is\nthis idea that\num when you have simple algorithms\nthere's more ways to implement a similar\nalgorithm because uh if I think about it\nlet's let's think about it this way so\nlet's say I'm only using half the\nweights because all I have to do is\nimplement this like you know relatively\nsimple thing it only takes a couple of\ncomputations and the rest of them\nbasically don't matter well if they\nbasically don't matter then that means\nany possible value for them is all in\nthe Basin all of the values of those\nremaining parameters all end up in the\nsame base sentence so because of that\num you know it ends up occupying very\nlarge volumes that's one reason right\nyou know the sort of possibilities\nredundant parameters\num also algorithms that have symmetry so\nif I have if there's like two different\nways to implement the algorithm that's\nsort of effectively symmetric with each\nother\num then you know both of those will sort\nof occupy the same Basin for that same\nalgorithm and so algorithms that are\nsort of very symmetric that have the\nproperty that there's multiple sort of\nsymmetries in how you implement them you\nknow something where it's like uh you\nknow if I sum a bunch of numbers I can\neither start from the left side or I can\nstart with the right side you know\nthere's multiple different sort of same\nsimilar ways you know you know symmetric\nways do the same algorithm those will\nend up having you know multiple ways to\nimplement it in the same Basin and so um\nyou know those are some factors right\nyou know that there's there are multiple\nways implanted that can yield these sort\nof simpler algorithms have larger basins\nI mean also other factors like\num just the regularization right you\nknow we we force it to try to find uh\nyou know parameter uh you know\nalgorithms which have the you know low\nparameter values we often do stuff like\nDropout where we actually will take some\nparameters out randomly and force it to\nstill be able to work even if we remove\nparameters and so that forces it to sort\nof have to find things which you know\naren't sort of super brittle algorithms\nand so we\num you know like one of the one of the\nreally important things here is that\nit's not just that we like you know\nHumanity just stumbled upon you know\nsome you know way of doing machine\nlearning you know which just magically\nalways results in like simple things\nthat's sort of true but in important\npart of the picture is also what we\nwanted it to find simple things right\nlike the reason that we do machine\nlearning in the way that we do it is\nbecause there has been uh you know years\nand years of progress on trying to find\nthe particular ways of parameterizing\nfunctions which in fact have the\nproperty that when we search over those\nparameterizations we find simple\nfunctions right because that's the thing\nthat actually gives machine learning\nit's it's power its performance it's\nwhat lets it you know find these\ninteresting functions they're actually\nable to solve complex tasks\num and so that's an important part of\nthe story as well\nintuitively it seems to be like Dropout\nmight lead to a more complex algorithm\nlike I'm imagining I've got a fourth\nstep process to solve a function but it\none of my steps might just stop working\n10 of the time I feel like I would need\na more complex solution to actually get\nto the right answer every time so why\ndoes Dropout actually simplify these\nalgorithm instead of making them more\ncomplex because then for redundancy in\nthe light\nyeah that's a really good question and I\nthink not one I have\nbecause I think the problem is you know\nwe don't actually really well understand\nexactly what it what it does do you know\nthings that we do understand are\noftentimes it does seem to improve\nperformance and you know not just train\nperformance right but test performance\nand so to the extent that we believe\nthat like real world data is actually\ndistributed according to you know some\nsomething like Simplicity that means you\nknow by you know Occam's Razer you know\nit has to be learning something simpler\nbut that's obviously not very satisfying\nyou know we really want to understand\nwell okay why how is learning you know\nsome simpler algorithm\num and it's a little bit unclear right\nso I mean I describe something you know\nin Broad Strokes that's like well you\nknow in some sense you know if it's\nreally brittle you know that's a really\ncomplex algorithm because you know\nsimple algorithms should you know have\nyou know this property that they're sort\nof continuous that they don't depend on\nlike tiny little pieces now you know is\nis that true I mean you know I think one\nof the ways to think about this right is\nthat when I say simple right what I mean\nis not you know the same as what it\nmeans right to humans right simple is\nusing big quotes here because simple is\njust whatever the actual property is of\nthe world that results in you know\nhaving you know algorithms which have\nthat property do well and whatever\nproperty it is of neural networks which\nin fact results in you know uh it being\nyou know uh generally finding algorithms\nwith that property and so you know we\ncan't always sort of interpret it in the\nsame way we would try to interpret sort\nof you know our notion of Simplicity\num and especially in the case of Dropout\nyeah I agree that the thing you're\nsaying makes intuitive sense and I wish\nI had a really nice satisfying answer to\nwhy like okay but actually the notion\nSimplicity doesn't work that way\num but I don't I don't I think we just\nactually don't really understand you\nknow super well exactly how the you know\nthese sorts of you know biases work\nyeah\nI I remember raising a paper from 2016\nby\ngeneralization and in this paper\nit was kind of strange because they\nshowed that\nyou could train a collisional network to\nfit noisy data entirely\nand if you train the modern to classify\nproper data bits with also partially\nnoisy data it first learned to properly\nclassify the real data and then over fit\nto the noisy one which\nseems to imply that\nit nerds first the simple algorithm to\nclassify\nthe data well and then\nhave to rely on overfitting which seems\nto be the opposite of what's going on\nwith working so do you think it's\nbecause of uh\nin the case of croaking modular division\nit's harder to learn than learning the\nDetector by herbs yes this is a really\ngood question so I think that what I\nwould say there is what's happening is\nthat if this is this graph here this\nthing that we call Double descent where\nyou know when you're when you're early\non right when you're in this initial\nregime then what you're doing is you\nknow you're you're sort of you know as\nyou keep making it larger as you keep\ntraining you end up in a worse spot\nbecause you sort of do this overfitting\nwhere you end up uh you know learning\nyou know just sort of memorize a bunch\nof data points but if you kept going you\nprobably would eventually see that you\nknow the second part with even larger\nyou know more training you would start\nto do better again\num groking is just sort of a very\nextreme example of that if you look\nclosely you can actually see that there\nis a double descent right where you\nactually you know you you do go up and\nthen go back down again and then\neventually you just have this very\nextreme thing at the end where you find\nthe sort of exact simple algorithm\num what the shape of that sort of double\ndescent curve is and how it looks will\nvary in the particular setting but yeah\nI mean I think that basically that's\nthat's a phenomenon that's going on\nthere\nokay uh so we'll talk a little bit about\nwhat this looks like in practice so uh\nyou know we've talked a bunch about the\nsort of relatively simple classification\nproblems cases where you just have you\nknow shapes in color or you know where\nyou're just sort of trying to solve some\nrelatively straightforward\nclassification problem like that\num in practice that's not always how we\nuse these sorts of systems so you know\none example of a thing that we will\noften do is reinforcement learning\num I think you should not think of\nreinforcement learning as really\nfundamentally different than anything\nwe've talked about so far\num structurally it's doing something\nvery similar so what we are doing is\nrather than searching for the function\nwhich sort of has the best performance\non the data we're searching for the\nfunction which when it acts in some\nenvironment results in the best\nperformance in that environment so we're\nsort of instead of searching for the\nsort of you know simplest function which\ndoes a good job at um you know doing you\nknow classification of images we're\nsearching for the simplest function that\nwhen you give it a go board and it you\nknow produces a go move if you iterate\nthat many times results in good\nperformance at the game of Go and so\nwe're still fundamentally doing the same\nthing where we're just searching over a\nlarge space of parameters trying to find\nthose set of parameters which in fact\nresult in good performance on some data\num but now we sort of have this\ninteractive setting where the good what\ngood performance means means does it\nhave the ability to actually play and\nwin over you know many series of\ninteracting with an environments and we\nuse various algorithms for being able to\nyou know search over that space in a\nsort of meaningful and effective way but\nit's still essentially doing the same\nthing we're still searching over this\nlarge space of possible parametizations\ntrying to find some algorithm with which\nhas some particular property on some\ndata\nokay and so you know we this is this is\noften you know yields great results in\npractice uh you know like stuff like\nalphago\num uh but but structurally it's doing\nsomething very similar uh where you know\nwe're still trying to find the symbolist\nalgorithm uh it's just you know now\nwe're sort of trying to find the\nsimplest policy the simplest sort of uh\nyou know way to um act in an environment\nokay\nuh now you know probably I think you\nknow if you're familiar with you know a\nlot of the big advances of machine\nlearning you'll be familiar with stuff\nlike alphaco it'll probably be even more\nfamiliar with uh language models large\nlanguage models you know that probably\nare the most you know well known and\nmaybe probably the most powerful uh you\nknow the existing sorts of models that\nexist today uh and they're really good I\nthink that if you have never interacted\nwith a large language model before or\nhaven't really interacted with one very\nmuch I think there's really no\nsubstitute to just doing it yourself I\nthink that uh the best way to sort of\nget a handle on what it is that these\nthings are and like effectively how they\nwork and you know what they're capable\nof is you know find you know one online\nthat you can interact with you know chat\ngbt whatever and just sort of talk to it\nask it questions I think this is a\nreally valuable experience for anyone\nwho's interested in trying to understand\nAI uh and how it looks right now to do\num but but long story short is they can\ndo a lot of really impressive stuff\nright so what we did is you know we just\nyou know found the simplest algorithm\nwhich fits the entire internet uh you\nknow effectively and it turns out the\nsimplest algorithm that fits the entire\ninternet uh you know can reason it can\nwrite stories it can talk you know in a\nhuman sensical way it could do a lot of\nreally crazy stuff\nokay\num but fundamentally you know what what\nit is that it's doing you know is um\nit's hard to understand because now we\nhave a data set that is so large and so\ncomplex that you know the simplest\nalgorithm is able to fit a data set that\ncomplex and that large well it need not\nbe that simple of an algorithm anymore\nright the out the you know once we put\nthat constraint that it has to be the\nsimplest algorithm that fits the whole\ninternet that's a strong constraint and\nit means that the resulting simplest\nalgorithm might still be very complex uh\nand very complicated and so you know\nunfortunately our understanding of what\nmechanistically large language models do\nyou know what happens when you take the\nsimplest algorithm that fits the whole\ninternet is uh is quite poor\nand so we'll talk uh a little bit later\non in lecture series after a bunch of a\nbunch of lectures we'll return to this\nquestion we'll talk a little bit more\nabout what it is that large language\nmodels might be doing mechanistically\nbut even then all I can promise is\nspeculation so we're going to speculate\na little bit later on about what it is\nthat these things might be doing but um\nit's complicated and you know our\ncurrent our understanding of exactly\nwhat mechanistically they are doing is\nis not super great you know uh what we\nhave is just well there whatever the\nsimplest thing is you know that fits\nfits the internet uh you know whatever\nthat is you know and even then it's like\nwell you know exactly what the inductive\nminuses you know inductive biases are\njust a name for uh whatever it is that\ndetermines the sort of metric of\nSimplicity in in neural networks and in\nmachine learning um whatever those\ninductive biases are whatever that\nmetric of Simplicity is you know uh even\nthen we don't fully understand it right\nyou know it's not exactly the same as\nhuman Simplicity it's some notion of\nSimplicity that does in fact a good job\nand whatever that notion of Simplicity\nis you know there's some algorithm which\ndoes a really really good job on that\nthat we find when we train these sort of\nmassive language models and in practice\nwhat we see is that whatever that is it\ndoes a really good job in practice but\nwhat it actually is doing is is quite\ndifficult to understand Yeah question\nwhat do you mean when you say training\nan algorithm that fits the entire\ninternet uh can you cover the board on\nthe data set that's when we compose we\nretrain these language models and how is\nthat hey I'm literally training though\nthe entire internet\nso it's basically true right uh you know\nit's there's there's a couple of you\nknow kawaiyats here uh in terms of you\nknow exactly how scraping is done\nexactly how you know data set creation\nuh happens but basically the sort of\nstandard state of the art currently for\ntraining very large language models is\nmake a massive scrape of as much of the\ninternet as you can find you know trying\nto do some amounts of filtering and you\nknow extract out the sort of you know uh\nyou know data points which are actually\ngoing to be most useful and relevant uh\nand then just train on them you know\nfind some simple algorithm which is able\nto fit all of you know internet attacks\nwhich is able to predict what the next\nword will be in any situation that it\nfinds itself on in the internet\num\nwhat\nthe Western internet no it's not just\nthe Western internet uh in many cases\nyou know scrapes will exist of you know\nbasically anything that is you know\npublicly available\num it really will depend on exactly what\nyou know case you know sometimes there\nwill be cases where people will filter\nto just English text\num that's not uncommon\num but it's not always what's done so\nyou know it it just sort of varies on\nexactly how you want to train your model\nand what you want it to be able to do um\nif you're mostly interested in just\nbeing able to interact in English then\nyou know it's going to be you know\nwasteful to sort of train on a bunch of\nthings that are not English and so\noftentimes you can try to you know just\nfilter down English but that's not\nalways done\num so you know it really depends but\nessentially it is you know take a really\nlarge scrape of you know a bunch of\ntexts from the internet that is as high\nquality as you can make it while still\nhaving a huge amount and then just spend\nmillions of dollars training you know\nsome really really large parameterized\nfunction that in fact results in good\nperformance on that data set\nso the answer this is going to have to\nbe just speculative but\ngiven what we've learned to before about\ndouble percent and how eventually the\nmodel reaches a point where the simplest\nfunction it can understand that fits the\ndog the best is just memorize everything\nit seems that current language models\nhave not reached that point do you think\nit's possible that we could end up with\nlike gbt4 or gbt5 or something that\nactually does just memorize the entire\ninternet and then gbt6 would actually be\nworse than gpd5 if they continue to\nexpand it\num so that's only possible if the sort\nof people building these models sort of\nyou know made a mistake because\num everyone knows you know the graphs\nthat I was presenting right and so we we\nsort of have actually pretty good\nability to be able to predict and\nunderstand where the various basins are\ngoing to end up on those graphs and so\nyou can extrapolate and pick exactly how\nmany tokens to train on how large of a\nmodel to use uh for the purpose of\nending up in a spot where you know\nyou're you're sort of scaling predicts\nit's going to have good performance\num so that's what people do in practice\nand so\num you know if you sort of just did the\nnaive thing of just like take the exact\nsame model and make it bigger and bigger\nand bigger and bigger uh well like\nchanging no other properties of it then\nyes you would you would encounter that\nbut that's not what people do right they\nactually change the amount of data and\nanother factor is as you sort of make\nthe model larger to you know prevent\nthat from happening and so um yeah if\nyou were sort of doing something naively\nyou would see this but because people\nare not doing it naively you know you\nyou shouldn't see that\nso basically you'll end up with this\npoint where\nthe model might need to either need more\ndata or a lot more compute to get better\nperformance but we would know that\ninstead of just spending millions of\ndollars on training a model I'm like\nwhoopsie\nyeah I mean so it is possible that like\nsome of these sorts of you know scaling\nlaws could break you know things could\nhappen differently than we predict but\num you know we do at least understand\nthis phenomenon enough that you know\nit's unlikely that it would just be you\nknow a situation where you know we just\ntrained it into accidentally into the\nsort of wrong regime\num there may be other cases where\nperformance decreases for other reasons\nso there are examples of what's called\ninverse scaling where as you sort of\nscale up your language model you know\nvarious different metrics that are of\nlike desirable properties can can go\ndown in certain cases oftentimes you\nknow one example of this is cases where\num there's sort of a you know misleading\nanswer to a question where the model\nwill sort of be you know larger models\nare often better at being able to sort\nof convince you know people of these\nmisleading answers and so you know it\ncan sort of you know do worse on a like\nyou know how good is the information\nquality as the the model gets larger\nsometimes but that's still it's not a\ncase of like you know oh you know we\njust sort of like you know totally\nscrewed up and like trained the wrong\nyou know size model it's a case of okay\nthere's some sort of more complex thing\nhappening in what it means for a simple\nfunction to fit this data right there's\nsomething going on in how the sort of\nfunction operates that causes it to sort\nof have this particular Behavior change\nas we get sort of simpler you know\nfunctions that are better fitting\nwithout\ncool okay so we're going to sort of\nshift gears now uh for the last part of\nthis talk so uh you know we've talked a\nbunch about machine learning and we've\ntalked about you know this basic process\nof you know taking these large data sets\nyou know Finding simple functions which\ndo a good job on the data set\num and you know the sort of fundamental\nproblem that we've encountered is that\nwe often can't know exactly what it is\nthat the sort of simplest function that\nfits a data set is doing because we\ndon't fully understand what this notion\nof Simplicity is how it operates and\nsort of what it corresponds to in terms\nof what the actual algorithms are that\nthese models end up implanting and so I\nwant to sort of take a step back and ask\na sort of basic question which is what\nis the worst case scenario right what is\nthe worst possible thing that it could\nbe that these models are doing\num that we would be concerned about\num given that you know we don't know\nright you know if we are in a situation\nwhere we're building really powerful you\nknow systems and we don't understand you\nknow structurally what those systems are\ndoing\num I think it's you know prudent to take\na second and sort of try to understand\nyou know okay how could things go wrong\nwhat are the worst possible scenarios uh\nfor what these models could be doing\nokay so that's that's that's part two of\nthe first talk so uh what we're gonna be\ntalking about here is this notion of\ninstrumental convergence\num so I'm going to borrow an example\nfrom Stuart Russell who's uh professor\nof AI at UC Berkeley uh and likes to\ntalk about this sort of thing a lot\nwhere he sort of uses uh a sort of\nanalogy to illustrate the sort of why we\nmight generally be concerned about the\nsituation where we have very powerful AI\nsystems in the world\num and situations where they start to\nget more powerful than humans\nuh he sort of calls this the gorilla\nproblem\nso the question that we sort of that he\nlikes to ask here is well\num you know what is the second most\nintelligent species on the planet\num you know I mean pick your favorite\ngreat ape I'm not gonna claim it's\ndefinitely the gorilla uh I'm not an\nexpert but uh whatever we'll suppose\nthat it's you know it's the gorilla you\nknow substitute chimpanzee or whatever\nother you know animal you prefer it's\nprobably something like that is the most\nthe second most intelligent species on\nthe planet after after humans\num and so you know what what is their\nfate right you know what determines the\nability of the gorilla you know as a\nspecies and of individual gorillas for\nthem to be able to you know Thrive and\nlive good lives and and do the things\nthat they want to do\num and the answer is while it's mostly\nnot about the gorilla he mostly has very\nlittle to do with what the gorillas do\nand almost everything to do with what we\ndo right um you know the ability of\ngorillas to continue to have a habitat\nto continue to exist and not be\nendangered to you know Thrive to be able\nto you know find food and resources and\nlive their lives is dependent on our\nability to let them do that you know\nit's dependent on things like\nenvironmental measures on things like\nglobal warming on things that like\nenvironmental policy that the gorillas\nprobably don't even understand or\ncomprehend or could comprehend\num but that's what determines their fate\nright the thing that is going to\neventually determine the fate of the\ngorillas is the you know way that we uh\nuh you know treat them in the general\npolicies that we build\nhas nothing to do with the gorillas and\nso this is a sort of you know uh maybe\nstriking example of a case where um you\nknow it's not great to be the second\nmost intelligent species on a planet\num you know even though you know maybe\nwe you know people have tried to\ncommunicate with the gorillas it's not\nlike they're you know totally you know\nincapable of communication they clearly\nhave the ability to you know have you\nknow meaningful preferences in some\ncases\num but their ability to act on those and\nactually you know you know is is highly\nconstrained by whatever you know however\nwe treat them\nand so if we are in the business of\nBuilding Systems that we expect you know\nmaybe we'll eventually be uh more you\nknow more intelligent than us\num you know we should be concerned that\nwe may end up in a similar situation\nwith the roles reversed where you know\nif we are building uh you know systems\nwhich are more intelligent than us that\nyou know our fate May lie more in the\nhands of those systems that than it does\nin in ourselves\nand that's a concern right you know if\nwe're talking about you know originally\nwhat is the sort of whole point to this\nbut we want to avoid egg Central risk we\nwant to avoid situations where humanity\nis disempowered where we no longer sort\nof are able to control our own destiny\nand so if we are in a situation where um\nyou know that control is rested away\nfrom us in the same way it sort of rests\naway from the gorillas\num we're concerned\nokay so you know this doesn't say\nnecessarily that we should be concerned\nyou know maybe you know AIS will treat\nus well or maybe you know uh you know we\ndon't know right we don't know what\nthey're doing but we want to understand\nthe worst case scenario so um we want to\nunderstand\num you know okay if Our Fate is in the\nhands of these sort of very powerful AI\nsystems\num you know we don't know what they're\ndoing but what things could they be\ndoing that would be bad for us\nso I have here a Minecraft map\nand I sort of want to consider a thought\nexperiment\nso let's say we are at spawn we're at\nthe center of this Minecraft map and we\nhave some objective you know maybe we\nwant to build a house in you know the\ntop left quadrant we want to you know go\nthousands of blocks away and build a\nhouse\nwhat do we do right you know what's the\nbest strategy for us in building that\nhouse well you know what we're going to\nhave to do is you know we're gonna have\nto you know build a boat you know chop\ndown some trees you know use those to\nyou know build Transportation we're\ngoing to have to uh you know gather\nresources start you know mining you know\ngold and diamonds\num okay those are some of the things\nright that we would have to do if we\nwere to try to you know build a house on\nthe top okay but let's say instead of\ntrying to build a house what we wanted\nto do instead was you know blow up with\nTNT the entire you know you know Chunk\nin the bottom okay that that's a totally\ndifferent goal right it involves\ncompletely different you know objective\nuh but what's the best way to accomplish\nit well we're still going to need to be\nable to get ourselves down there where\nwe want to be able to blow things up so\nwe're still going to have to be able to\nyou know build a boat gather the\nresources to be able to transport\nourselves and we're gonna have to get\nall the TNT from somewhere so we're\nstill gonna have to mine things out and\nyou know kill a bunch of creepers and\ngather a bunch of resources so we can\nactually build the TNT that we need to\nbe able to blow the thing out so it\nlooks pretty similar right you know we\nhave these very disparate goals you know\nin totally different places where we\nwant to do very different things\num but in this sort of setting they they\nuh they require inputs resource inputs\num which are very similar they require\nsort of you know same basic resources to\naccomplish these very disparate uh uh\nobjectives\num and so there's sort of a basic\nstructural property of the world at play\nhere\num and an important uh yeah so important\nthing to point out is that this sort of\nstructural property here that we're\ntalking about it's not a property of you\nknow the goals per se it's a property of\nthe world it's a property of the fact\nthat in in Minecraft there are resources\nuh you know you know items you know\nblocks you know things that you need to\nbe able to do almost anything in the\ngame\nand because of that property of the\nworld almost anything that one might\nwant to do in Minecraft requires these\nsame resources and what that means if\nyou have multiple agents in the world\nthat might want to accomplish the same\ngoals or or might want to accomplish\ncompletely different goals they're still\ngoing to have competition for the same\nbasic resources\nokay so I'm going to borrow another\nexample from Stuart Russell\nwho likes to call this problem the the\nsort of problem of you can't fetch the\ncoffee if you're dead so the idea is you\nknow you have you know some model you\nknow some agent some system and it has\nsome some goal whatever it is you know\nin this case it wants to fetch coffee\nand the problem is well Fetch and coffee\nhas you know basic resources that you\nneed one of which is your own Survival\nyou know you need to not die right and\nso in the Minecraft example you know in\nall of the cases I'm going to probably\nneed to get some gear you know to\nprotect me right against enemies in this\ncase you know I I need to make sure I\ndon't die if I'm going to fetch coffee\num and so there's basic structural\nresources basic things that are\nnecessary to do almost anything anything\nthat you might want in the world\nrequires the same sort of basic inputs\nin many cases\num and of course you know this is not\njust true of Minecraft it's true of the\nreal world as well so you know this\nthere there are basic resources in that\nexist in the world you know uh materials\nuh you know matter energy you know neck\nentropy you know uh you know atoms of\nvarious different varieties elements you\nknow all of these things you know that\nexist in the world and one needs to do\nalmost anything right anything you want\nto build anything you want to accomplish\nthat happens in the world requires the\nsame sort of basic resource inputs\nalmost regardless of what it is\num and uh this sort of structural\nproperty of the world uh you know the\nsort of basic property of the world that\nyou know different very disparate goals\ncan require the same resources is is the\nproperty we call instrumental\nconvergence where\num the same sort of you know all of\nthese very different goals can Converge\non the same instrumental goals Where You\nKnow instrumental here is just a thing\nwhich is useful for accomplishing\nanother thing right and so the things\nwhich are useful for accomplishing other\nthings tend to be very convergent across\nsort of many different possible things\none can do in the world\none thing I would point out is that you\nknow I did sort of make a hidden\nassumption here which is like well why\nwould your AI even want to accomplish\nanything in the world at all\num and that's an interesting question\nit's one that we'll sort of going to be\ntalking a little bit more about in the\nnext two talks\num but uh the thing I will say right now\nis well okay\num you know we don't know you know it\nmight it might not be um it's very\nunclear you know whether whether we will\ntrain systems that are trying to\naccomplish things in the world but the\npoint is that if they are trying to\naccomplish things in the world and we\ndon't fundamentally understand why they\nwould or would not be\num almost regardless of what it is that\nthey might be trying to accomplish we're\nwe're concerned because humans want\nthose resources too to be able to do the\nsorts of things that we desire and if we\nare in competition with AI systems for\nthose resources it could be um quite\ntricky\nokay uh question\nhere you go\nwhat is my goal as Journey just to make\ncoffee within five minutes sure they it\nwould be easier to just make the culture\nthan to fight the people over the world\nand steerable cripples resources\nyeah so I think that's a good point I'm\ndefinitely not claiming that like any\npossible goal that you could have would\nalways yield your model you know trying\nto take over the world\num I think that there's a couple of\nthings that we can say here so one is\nlike well okay we're trying to think\nabout the worst key scenario and we're\ntrying to think about okay how many\ngoals are there that result in bad\nthings and even in that case you know\nthere's still some resource competition\nyou know there's some resources that you\nneed you need to like exist for enough\ntime to get the coffee for five minutes\nyou need to be able to do enough things\nin the world to accomplish that that\ngoal\nand if you're really you know gung-ho\nabout that goal then you know you also\nhave to do other things like you know\nyou know protect yourself you know build\nyou know as many defenses at you know\nagainst possible ways which you could\nfail your five-minute task make sure\nthat you know you're as likely to\ncomplete that task as possible and those\nthings could could yield increasing sort\nof you know resource consumption\nbut the general point is not that\num anything that your AI system could\npossibly be doing is going to result in\nit wanting to you know kill all humans I\nthink that's that's not true I think the\npoint though is that\num we don't know what it is that they\nare going to be doing structurally you\nknow all we know is something like okay\nit's going to be something like the\nsimplest algorithm which you know fits\nsome massive data we don't exactly\nunderstand what that algorithm might\nlook like or what it might be trying to\ndo in in you know in whatever sense\ntrying means\num and it's conceivably you know\npossible that it could be trying to do a\nvery large variety of things many of\nwhich if the things that it is trying to\ndo involve accomplishing something in\nthe world results in uh you know\noutcomes that that we might not like\nright result in outcomes where they are\ncompeting for resources with humans\nresources that we would we would like to\nuse um you know to make our lives better\num instead\nand so you know there are many cases\nwhere that resource competition is maybe\nokay right so maybe we want you know if\nwe want if our AIS are competing\nresources on our behalf they are trying\nto gather resources for us in the way\nthat we would want them to be doing so\nthen maybe we're okay right you know but\nthere's a lot of possible things they\nmight be trying to accomplish almost\nanything you know any sort of randomly\nselected thing in the world you might be\ntrying to do\num where um you know if that's not you\nknow exactly you know the sorts of\nthings that we want\num you know that humans might end up in\nnot so good position\nand I think that I'll point out about\nthis is if you're interested in this\nbasic phenomenon of instrumental\nconvergence\num it's something that you show that\nshows up not just in you know Minecraft\nnot just in the real world it's a very\nbasic structural property of almost any\nuh environment so even a randomly chosen\nenvironment where you just randomly\nchoose what the states are in that\nenvironment and what the transitions are\nbetween different states has the\nproperty that optimal policies in that\nenvironment the sort of best way of\nsolving that environment for random\ngoals random desired States will result\nin a property where the optimal policies\nwill tend to seek power where states\nthat are powerful in the sense that if\nyou go to that state you have the\nability to sort of unlock a large\nvariety of different new states that you\ncould go to from that point will be sort\nof convergent for almost any way of sort\nof uh you know accomplishing any random\ngoal in a sort of most optimal way\nuh and so this is you know uh we can\nsort of prove and you know study this\nmathematically and so if you're\ninterested you can you can take a look\nat the often policies tendency power\npaper\num but the basic you know sort of\nstructure here is sort of very\nfundamental to almost any of these sorts\nof environments it's just the case that\nyou know in any case in any situation\nwhere there are sort of you know General\nproperties States you know resources\nwhich are useful uh for like doing\nthings then those resources are going to\nbe in competition and they're going to\nbe you know useful for a wide variety of\ndifferent possible uh you know things\nthat one might want to accomplish\nand that's the core concern here\nfurther being research a couple years\nago around satisfy service rather than\nmaximizers where they're just trying to\nget like a specific threshold maybe\ncapping their instrumental weakened\nburden goals\num I'm also kind of drawing up vanilla\nhere to wanting to create a system that\neventually is Happy turning left rather\nthan going right and kind of staying\nthere\num is there also kind of an element here\nof the tragedy of the commons problem or\neven if we knew how to correctly and\nsuccessfully do that\nsystems that go right or simpler in some\nsense drawing back to 10 of this\nparallel machine learning producing\nsilver systems and so there's this\nreally heavy pressure impetus towards at\nleast some of the systems always\nperpetually dealing right and their core\nkind of failing a tragedy that comes\nfrom\nyeah okay a bunch of questions there so\nI'll start with the uh satisfizers so I\nthink satisfy is a really interesting\nidea for people who are not familiar the\nbasic idea is well what if you try to\naccomplish a thing right and then stop\nsort of similar to The Proposal uh you\nknow that was that was talked about\nearlier about you know what if you you\nknow get the coffee in five minutes and\nthen you're done right you sort of have\nsome level at which you want to achieve\nyour goal and then you stop\num it's a really interesting idea I\nthink the main you know most fundamental\nproblem is you know what if that's not\nthe simplest way right to fit the data\nright we're sort of in this setting\nwhere we don't get to pick exactly what\nthe structure is of the algorithm that\nwe find when we do machine learning\nright we get some control over it right\nwe get a lot of control over the data we\nget a lot of control over the\narchitecture but how the architecture\nand the data combine to find the sort of\nsimplest algorithm that fits that data\nis is a is a complex process that we\ndon't get a lot of control over right it\ndepends on these basic structural\nproperties about what algorithms are\nsimple\nand so it may be the case that we can\nfind mechanisms for consistently\nproducing satisficers rather than\noptimizers and it may be the case that\nwe can't I think currently I don't know\nof a way that you could consistently\ntrain models and believe that they're\ndefinitely going to be satisfizers I\nthink that's a really tricky thing to be\nable to do you can you can certainly do\nthings where you sort of train models on\nfixed reward schedules where they you\nknow they can't ever you know get more\nthan some level of reward but even in\nthat case you cannot end up in\nsituations where your model wants to\nmaximize the probability of it achieving\nthat level of reward right where it\nwants to you know maximize the\nprobability that it's able to you know\nget at least that much of whatever you\nknow it is that's trying to accomplish\nand so you know it can get very tricky\nin terms of you know actually getting a\nthing which is well described as\nsatisficer and even then if I'm training\non a reward which is just like you know\nwhatever the loss that I'm trading it on\nI don't have a guarantee that my model\nin some sense cares about the thing I'm\ntraining on all I know is that it is\nsome simple algorithm that fits the data\nwell and that could fundamentally be\ndoing doing almost anything\nand so I think satisfy is an interesting\nidea but but it's how to actually get\nthem to work in practice is is is\nunclear\num\nand then let's see uh\ngoing on the other questions about the\ntragedy of the commons you know what\nthis might look like sort of uh you know\nin terms of\num\nyou know in practice sort of how this\nplays out I think that\nit's really sort of unclear I certainly\nthink that you can sort of think about\nwhat's happening here as a sort of\ntragedy of the common scenario where\nit's like okay you have a bunch of\nresource competition it would be really\nnice if we had some ability to you know\nbe able to sort of you know manage those\nresources in a way that wasn't just you\nknow whatever the most powerful you know\nintelligent species is is able to just\nyou know control over them\num but of course you know we we don't\nhave you know some mechanism that we can\nuse to just sort of constrain any you\nknow possible agents you know uh you\nknow certainly you know we have\nmechanisms that exist in the world today\nthat we can use to constrain humans uh\nyou know we have things like laws and\ngovernmental institutions you know that\nwe use to create a society that\nconstrains humans in particular ways but\nwe don't necessarily have the ability to\nconstrain some arbitrary intelligence\nright you can imagine you know certainly\nthere are rule you know there are sort\nof you know in some sense rules you know\nmaybe among gorillas right you know\nwhere there are things that you know\nthey'll you know they punish you know\ndefectors against their you know you\nknow what what you know if you try to do\nsomething in the you know gorilla\nSociety you know you uh you know maybe\nthey'll you know cast someone out of the\ngroup or whatever but um you know those\nthose have almost no bearing on us right\nyou know we don't care you know that\nthere are you know particular you know\nways that gorillas punish other gorillas\nyou know that has no bearing on us\nbecause we don't have to care you know\nthe we we are so much more powerful and\nintelligent than the gorillas that\num you know it doesn't matter\num whatever structures they have and so\nit's possible we could build systems in\nways that are able to constrain you know\na society of you know humans and AIS you\nknow having very powerful intelligent\nyou know AIS in the world um but It's\nTricky right you know we can't just do\nit via the same you know mechanisms that\nwe use to constrain humans because you\nknow those mechanisms are limited by you\nknow humans actually being willing you\nknow to cooperate and uh for us to be\nable to have the ability to you know\nmake them do what we want when they\nrefuse to cooperate and it may be very\ndifficult to do that if you have a you\nknow system that is substantially\nsmarter than you that can you know\noutfit you in any situation\nand um so that's potentially a concern\nokay so if there's any last questions at\nthe end we can sort of I can take a\ncouple of last questions but that is uh\nit for lecture one uh we're gonna you\nknow there are still a bunch more to\ntalk about and we'll get to those uh\nlater but that is the end of the first\nuh the first part\n[Music]\n[Applause]\nokay final questions\nso uh you mentioned at the very\nbeginning of this talk that a lot of the\nassumptions would be made in was that\nprosaic AGI\nbut the Mosaic systems could lead to AGI\nHow likely is this actually to be the\ncase\nyeah that's a great question so I think\nit's very unclear uh you know predicting\nthe future is always you know a very\ntricky thing to do and you know in many\ncases you know that's sort of the\nbusiness that that we're in right now\nwe're talking about this sort of thing\nthere's a couple of things that I'll say\nso the first thing I'll say is\num\nyou know\nin the case of like you know prosaic AGI\nI think that\num our sort of best guess should be it's\nprobably going to be something like what\nwe have right now right you know because\nit's so difficult to try to predict and\nunderstand exactly you know what future\ndevelopments might exist\num by default we should guess well okay\nyou know it's going to it's not going to\nbe the same as how things are work\ncurrently but our best guess should be\nhow things work currently because we\ndon't we can't predict exactly in which\ndirection is going to vary from where we\nare right now\num and so I think that you know trying\nto start from if it was like you know\ncurrent systems how would we be able to\nunderstand uh you know what to do in\nthat situation\num is a really good starting point and\nas you know new Advanced is common we\nsort of find new ways in which you know\nuh you know AI is developed we can\nchange you know that understanding just\nto suit those new directions but you\nknow by default we should expect those\nnew directions are probably going to be\nsomething similar to what exists right\nnow and so understanding what exists\nright now is sort of a key\num you know uh input into being able to\nunderstand things in the future\nand then the other thing I'll say is\nthat you know it is very difficult right\nto predict exactly when things are going\nto happen but you know I think that\nright you know because because all of\nthis is you know a prediction but I\nthink that in many cases there are some\nthings that you can predict right uh\npredicting exactly when something is\ngoing to occur is very difficult\npredicting that something will occur is\nmuch easier right so we can understand\nokay AI continues to keep getting better\nand continues to keep getting more\nintelligent\num as long as that Trend continues at\nsome point it will surpass humans at\nsome point it will become the most\nintelligent thing and we'll sort of have\nto deal with the consequences of that\nexactly when that will happen you know\nwhere do those graphs meet you know how\nwhat is the slope how do they you know\nchange that is very tricky to predict\nbut the fact that it will happen at some\npoint you know barring some other\nessential catastrophe that wipes out\nhumans seems quite clear you know\nthere's no fundamental barrier to you\nknow us can you know being able to build\nsystems that are smarter than us and\neven you know a very large number of\nsystems that were the same intelligence\nlevel as humans would also be you know\nequivalently existentially dangerous and\nso um\nyou know at some point it has to happen\nand you know exactly when I don't know\nand I'm not going to try to speculate a\nbunch on on when because I think it's\njust such a tricky thing to do uh and I\nthink this sort of involves you know a\nlot of you know you know analysis and\nalso just a lot of you know opinions\num and it's very difficult and so you\nknow I I think that speculating on when\nis hard but I think that we can talk\nabout these sort of General properties\nand we can try to sort of you know\napproach the uh task of being dealt you\nknow with this problem of how do we deal\nwith you know the problem you know the\nproblem that in the future these very\npowerful systems might exist it might be\ndangerous you know in as effective a way\nas we can which is well we we have to do\nsomething because you know the problem\nis concerning and we want to try to\naddress it\num and so you know we're forced to try\nto make whatever predictions we can that\nare as good as possible\nand you know so so we can try to you\nknow build upon what we have right now\ntry to understand how things work and\nyou know basically chart a course you\nknow for for being able to understand\nfuture things\num but it's hard and so you know one of\nthe things we'll talk about sort of\nlater on in some of the other lectures\nis telling multiple stories right where\nlike we don't know exactly how things\nare going to play out and so we can\nimagine you know multiple different ways\nin which you know sort of things go so\nfor example you know one thing that\nwe're we talked about a bunch here is\nwe're really\num you know uncertain about exactly what\nthe inductive biases are right exactly\nwhat the measure of Simplicity is that\nmodels use but we can sort of Imagine\nwell what if we think about multiple\ndifferent possible ways that it could be\nand then under different scenarios how\nwould things play out you know\num and so we'll sort of use that as a\nmechanism later on for trying to\nunderstand particular scenarios because\nthat's a way to sort of deal with our\nuncertainty you know we don't exactly\nunderstand what it is that is true but\nif we think about multiple different\npossible options we can see what's\nconvergent between those options and\nthat sort of can tell us what things are\nreally you know likely in practice\nyou mentioned one of the things you\nencourage people to do is to pick up a\nlarge language bottle at Chachi Kadin\nand play with it directly just to get a\nsense that it's capability that's\noutfits it seems like you're at the talk\nthe actual process of understanding\nbasins and it will kind of pathway\nthrough basins this ticket is also very\nimportant to build it into a shop how do\nyou recommend people build that kind of\nintuition through maybe toy models or\nfor more kind of theoretical\nunderstanding of what's going on like\nwhat's the best way to build an\nintuition of that\nyeah that's a really good question I\nthink part of the problem is that it's\njust not super well understood what\nexactly it is that\num\nthat that's sort of happening with those\nwith those basins\num\nand so I think that\num you know maybe the best way to sort\nof get an understanding there is just\num you know looking at what some of the\nmodels are that actually exist and how\nthey work and what they do\num and sort of trying to understand what\nis this what is this metric of\nSimplicity right it's determining you\nknow what these models are and how\nthey're functioning certainly playing\naround with different models and just\nunderstanding in a lot of different\ncases when we train on various different\nscenarios you know what things does that\nresult in can sort of help\num but yeah it's just really tricky\nokay uh if that's it we can we can call\nit there and uh we'll have another\nanother lecture uh hopefully uh coming\nup pretty soon\nI can't do tomorrow I don't think we'll\ntalk about it", "date_published": "2023-05-13T15:56:21Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "7923558ee6aec9a95c62602c9737dd3e", "title": "Concrete Open Problems in Mechanistic Interpretability: Neel Nanda at SERI MATS", "url": "https://www.youtube.com/watch?v=FnNTbqSG8w4", "source": "ai_safety_talks", "source_type": "youtube", "text": "kill\nyeah that'd be too\num all right so for the people I've not\nmet before\num hi I'm Neil I used to work at\nanthropic doing mechanistic\ninterpretability of Transformers I have\nspent the last couple of months as an\nindependent researcher doing some work\non grocking and more recently getting\nextremely nud sniped by various kinds of\nfield building and infrastructure\ncreation for mechanistic\ninterruptability and in about a month\nI'm going to be starting on the deepmind\nmechanistic interpretability team\nand this presentation is a essentially\nan ad for a sequence of posts I've\nwritten called 200 Corporate open\nproblems and mechanistic\ninterpretability\nthe structure of this presentation is\ngoing to be a whirlwind tour through a\nbunch of different areas of mechanistic\ninterpretability where there are things\nI'm confused about but I think I could\nbe less confused about and\ntrying to give an example of work that I\nthink is cool in that area and\nrecommendations for where people who are\nwatching this if I want to contribute\ncould get started\nand I'll get more into resources at the\nend but if you want to follow them at\nhome you can get to the slides with this\nlink\nuh you can see the actual sequence of\nposts with this link and I wrote a post\ncalled concrete steps to get started in\nmechanistic interpretability that tries\nto give a concrete-ish guide for how to\nget up to speed yep\nand I recently discovered how I can use\nmy website to create link redirects and\nuse it ever\nuh it's so satisfying\nso\nto begin I want to briefly try to\noutline what is mechanistic\ninterpretability\nuh so at a very high level the goal of\nmechanistic interpretability is to\nreverse engineer neural networks that is\ntake a trained Network\nwhich is capable of doing well on a task\nand try to understand how it does well\non that task\ntrying to use whatever techniques work\nto try to reverse engineer what\nalgorithm it has learned and what's the\nunderlying population of the agent is oh\nthe model is not necessarily an agent\nand one particular exciting reason you\nmight care about this is there are often\nmultiple possible solutions that a model\nmight learn\nuh to a problem which will look about\nthe same but might generalize wildly\ndifferently of distribution or break in\ndifferent ways and if we can actually\nunderstand what it's learned we'd be\nable to distinguish between these\nand the focus of this talk is not going\nto be on making a case for why I think\nyou should care about mechanistic\ninterpretability from a limit\nperspective\num my current favorite case for apps is\nthis post uh requesting proposals for\ninterpretability work from Chris Ola\nthough generally I feel like someone\nshould write up a good personal because\nit's been a while uh but my very brief\njustification is I think that a lot of\nthe limb problems basically evolve down\nto models could learn a solution that\nlooks like being deceptive and evil or\nlearn a solution that looks like Ashley\ndoing what we want these were extremely\nhard to distinguish for capable systems\nbut if we could reverse engineer them we\nmight have a shot\nand uh here's a cute iconic picture from\nsome work reverse engineering image\nclarification models where\nthere's a small subset of model where we\ncan identify neurons that correspond to\ndifferent bits of a car that then gets\ncombined into a car detection that's\nlike if their open doors on top and\nwheels on the bottom and a car body on\nthe bottom it's probably a cut\nand\na bit on the flavor of why you might\nwant to do mechanistic interpretability\num is\nso I think one thing that's just\nextremely hard about research in general\nbut especially alignment research is\njust\nyou really want feedback loops you\nreally want the ability to\nyeah you really want the ability to do\nthings and get some feedback from\nreality on have you done something that\nwas sensible or something that was\nincorrect\nand uh well it is very easy to trick\nyourself in mechanistic interpretability\nyou are ultimately playing with real\nsystems and writing code and running\nexperiments and the process of doing\nthis gets you feedback from reality in a\nway that I find extremely satisfying\nwhich I think also makes it easier to\nlearn about\num it is also my personal tastes just\nextremely fun\num sorry about people Matt's backgrounds\nand to me the feeling of doing\nmechanistic interpretability is some mix\nbetween maths because I'm reasoning\nabout this clean mathematical objects\ncomputer science where I'm thinking\nthrough algorithms or the model might\nhave learned various ideas behind\nmachine learning and really engaging\nwith the architecture of the underlying\nmodel\nNatural Science because I'm rapidly\nforming experiments forming hypotheses\nrunning experiments trying to get data\nand Truth seeking because I have this\nultimate grounded goal of forming troop\nLeaf support model\nand to my personal tastes this is just\nlike\nway more fun than anything else I tried\nin the lens and I see more about it and\nI think that a thing you should do if\nyou're trying to get into element is\njust travel to things and see what\nthings you find fun uh which things you\nseem good at and shoot this is actually\na fairly skipping weight\nand\nanother features that bit of advice is\nthat if you want to learn about\nmechanistic interpretability you should\nstart coding extremely early and write a\nlot of code and write a lot of quick\nexperimental code and get your hands\ndirty\nit's both the bar for entry for actually\nwriting coding running experiments I\nthink is notably lower than some other\nfields\nbut I'm also just giving this as actual\nadvice since I often observe people if\nyou try to get into the field who think\nthey need to spend weeks to months of\nreading before they write any code and I\nthink this can kind of triple your\nlearning\nand a final caveat to that and passion\nsales pitch is\num\nI'm slightly confused because uh it\nseems like way more people are into\ndoing mechanistic interpretability than\nthey were a year ago and it's like\nplausible that on the margin more people\nare interested in this than should be\nout of all people trying to get into\nalignments I'm kind of confusable to do\nabout this and I still want to give\ntalks about why mechanistic\ninterpretability is great how you can\nget into it and how you can contributes\nbut I feel like I showed at least\nvaguely flag that if you're almost\nindifferent between another era of limit\nand mechanistic interpretability\nforcibly it is less neglected in other\nareas though on the ground scale of\nthings everything alignment is\nridiculously neglected and this is\nterrible and\nyeah I'm confused about how to run into\nthat particularly sprays\num I should also say that in terms of\nphilosophy of questions for this talk\num the structure is intended to be a\nrapid tour where ideally you don't need\nto understand any section to understand\nthe next one and thus I'd ideally leave\nquestions to the end unless you think\nit's important for the flow of the talk\nwithin that structure\nso the kind of mechanistic\ninterruptability I care the most about\nis Transformer seconds the study of\nreverse engineering language models\nwhich in this case are all implement it\nas Transformers a particular neural\nnetwork architecture\num I\nand I care a lot about this because I\nthink that by far the most likely way to\nget AGI especially if it happens in the\nnext\n5 to 15 years is by something that looks\na lot like large language models trained\non something that looks a little like\ntransforms\nand it is wildly out of scope for this\ntalk to actually explain what\nTransformers are but I'm going to do my\nbest shot at giving a sub 5-minute\nWhirlwind tour\nso at a very high level the way that a\nTransformer like gbt3 works is its input\nis a sequence of words\nuh technically is a sequence of tokens\nwhich are like sub words but you don't\nneed to care about this\num the output of the model is a\nprobability distribution over possible\nnext words\nthat is we're just feeding a bunch of\ntext into the model and training it to\npredict what comes next in that text\na weird feature of the model is that its\noutputs actually has a probability\ndistribution over the next token for\neach token of the input sequence\nbecause the way the model is set up the\ncase outputs can only see the first K\ninputs so it called just cheats\nthe key thing here is just the model\noutputs probability distributions over\nnext words\nthe model's internals are represented\num by this thing called the residual\nstream which importantly is a sequence\nof representations\nthat is\nafter each layer\nthere is for every word in the input\nsequence there is a separate value\ninternal value in the network that is a\nseparate internal copy of the residual\nstream for that position of the sequence\num and this is a kind of 2D thing where\nyou have one for each input word and one\nfor each layer\nand unlike in classic neural networks\neach layer is an incremental update to\nthis residual string the residual stream\nis a running total of all layer outputs\nand each layer is just taking in\ninformation from it and changing both in\nit a bit rather than in a classic neural\nnetwork where each layer each layer's\noutput is just all that the model has at\nthat point\num and intuitively the way I think about\nbut intuition that I think is useful to\nhave when you're first learning about\ntransformance but it's not actually\nperfectly correct is that you can think\nabout the residual stream for some words\nas being the model's entire\nrepresentation of that word plus a bunch\nof context from the surrounding words\nand that as the layers go on the\nrepresentation of the residual stream\ngets more and more sophisticated context\nincluded\nand\nthere are two types of layers in the\nmodel that Each of which updates the\nresidual string the first is\nattentionless these move information\nbetween words\nand they're made up of heads each of\neach each layer is available with\nseveral heads Each of which can act\nindependently and in parallel and each\none is choosing its own\nidea of what information should be moved\nand a lot of the interpretability\nprogress has been trying to interpret\nheads\nand trying to interpret heads generally\nlooks like trying to understand in what\nways the model would want to move\ninformation around and how it needs\ninformation around\nand the second type of layer of the MLP\nlayers which stands for multi-layered\nperceptron\nand these do not move information\nbetween words they act in parallel on\nthe model's internals at each sequence\nposition and they process the\ninformation that the attention layers\nhave moved to that word\nand combines the model consists of an\nessential error MLP layer Etc stack up a\nbunch add each of these incrementally\npulls in more and more context to fit\neach words until at the end of the model\nit's able to convert that to a best\nguess what the next word will be\nuh thus ends my Whirlwind tour\num I have a what is a Transformer\ntutorial and a how to implement gpd2\nfrom scratch tutorial that you should\ntotally go check out and the internet is\nalso full of other people's attempts to\nmake Transformer tutorials if you want\nto learn this generally this is just a\nthing you really want to learn if you\nwant to do good making tup work\nthere are a bunch of the problems I need\nto talk about you don't actually need a\ndeep understanding of Transformers to\nget traction on\nbut hope that world and tour is\ninterested to people\nI am now going to move on to actually\ndigging into areas of concrete open\nproblems\nso\nthe first problem area\nis analyzing toy language so\nbye so buy a toy language model what I\nmean is you are trying to train a\nlanguage model analogous to Cutting Edge\nthing like gpd3 or chat EPT you keep the\ntraining setup of train on a load of\ntext and try to predict things weren't\nthe same\nand you keep all of the architecture and\ndetails the same but rather than having\nlike 96 layers like gp3 has you just\nhave one layer or\ntwo or three layers and sometimes you\nmight want to remove\nsome of the additional complexity in the\nmodel like having an attention early\nmodel\nwaiters remove the MLP layers and thus\nonly study its ability to move\ninformation\nand\nI think that toy language models are a\npretty exciting thing to study because a\nthey are just dramatically easier and\nmore trackable to study and try to\nreally get traction on reverse\nengineering but they're also complex and\nhard enough that I think we can learn\nreal things from them and that there's\nat least decent suggestive evidence that\nthe things we can learn do generalize to\nReal Models\na good case study of this is induction\nheads so induction heads are this\ncircuits we found in this paper I was\ninvolved in called a mathematical\nframework for Transformer circuits where\nwe studied toy attention only language\nmodels\nand we found that a thing that often\ncame up in language was some text that\nwas repeated in a sequence multiple\ntimes like there was just some there was\na form of a comment that someone quoted\nfurther down or a news article headline\nThat Was Then repeated in the text or\nwhatever\nand it's the model learns this\nfairly simple yet still somewhat\nsophisticated circuit where two heads\nwork together to say has the current\num has the current token appeared in the\npast if yes look at the thing that came\nafter that and assume it will come next\nand\nwe called these induction heads\nuh naively this might actually sound\nthat exciting but these turn out to be a\nreally big deal\num\ntwo uh such a big deal that we had an\nentire follow-up paper on them a\nuh two notable things about them is that\nthere's a they appear in basically all\nmodels who studied\nthere's a fairly narrow band in training\nwhen they appear and basically all of\nthe induction heads are pit\nhere's a pretty shop uh the yellow ball\nis the reasonable you go from no\ninduction heads to having induction\nheads\nand most excitingly they seem really\nimportant for this weird capability that\nlanguage models have called in context\nlearning where the model is capable of\nusing words far back in its context to\npredict what comes next\num this is analogous to say reading a\nbook and remembering some detail that\ncame up five pages ago in order to\npredict what's going to happen next\nthis is like kind of surprising that a\nnew look can do this and it turns out\nthat a doctor in the heads are just a\nreally crucial part of how the model is\nable to track these long-range\ndependencies and the narrow band of\ntraining when they appear exactly\ncoincides with a narrow bounded training\nwhere the model suddenly gets really\ngood at this\nand a further exciting thing about\ninduction ads is that we've made some\ndecent Traction in actually propose\nengineering what's happening in them I\nlook and try to explain this diagram but\nthis is copied from a group blog post\nfrom from Callum McDougall that is\nlinked in the slides that you can go\npick around in and try to read about how\nthese actually work\num I will flag it we are not actually\nhave a complete understanding of\ninduction hits and there are additional\nsubtleties and complexities that that\nwe're still kind of confused about\nbut the overall message that we take\naway from the section is just that\nthere are things that we are actually\nfairly confused about in toy language\nmodels yet they are also sufficiently\nsimple that we can get some real\ntraction on this\nand the the insights we learn genuinely\nseem to transfer to Real Models or at\nleast give us some useful insights on\nthem\nand a particular concrete problem that I\nwould really love someone listening to\nlike go and try to take on is\none of the big open problems in my\nopinion in mechanistic interpretability\nof Transformers is how to understand the\nMLP layers of a transformer how to\nunderstand the bits that are about\nprocessing the information in place\nrather than moving it between positions\nand I think a particularly promising way\nto get started on this would be to just\ntake a one layer Transformer with MLPs\nand try to understand just a single\nneuron there\nI assert they're just not an example of\na single neuron in a language model that\nwe can claim to really understand and\nthis seems kind of embarrassing\nand in the post accomplishing the\nsection I open source some toy language\nwalls I train so you can just go and\nplay\nthe next area of concrete open problems\nis what I call looking for circuits in\nthe Wilds\nwhere that is looking for looking at\nreal language models that weren't just\ntrained for simple that weren't just\ntrains in kind of like tiny toy settings\nand\ntaking something these models are\ncapable of doing\nand trying to reverse engineer some\ncircuits behind\num the work that most motivates this\nsection is this great paper called\ninterpretability in the wild from\nRedwood research which studied how gpt2\nsmall can do this particular grammatical\ntask of indirect object identification\nuh that is seeing a sentence like when\nJohn and Mary went to the store John\ngave the bag to and realizing that the\nsentence should be completed with the\nword Mary\nand\nthis is simultaneously a\ncomplicated task where it's like not\ntotally obvious to me how a model could\nsolve it but also\nuh not an incredibly hard or\nsophisticated or complicated task\nand interestingly it's also\nfundamentally a task about successfully\nmoving information throughout the\ncontext because you need to figure out\nthat John is duplicated you need to root\nthe words that are not duplicated to the\nfinal token and uh in their paper they\ndid this aerobic effort of reverse\nengineering that found this uh I believe\n25 head circuit behind how the model\ndoes this that roughly breaks down into\nusing some heads to figure out that John\nis repeated uh moving these to the final\ntoken with some X extra heads because\nagain the model has a separate internal\nrepresentation at each word in the input\nsequence and then having a cluster of\nthings that moves thing that moves all\nnames that are not repeated and predicts\nare those all complexed\nand\nI would just be really excited to see\npeople trying to find more circuits in\nReal Models\nand uh excitingly uh every time there's\na prop area I'm pointing to where I'm\nlike and here's some existing work that\nyou could be inspired by uh you can it\nsaves a dramatic amount of effort if\nthere's some existing work that you can\nCrypt from and useless awesome points\nand I made this uh demo notebook called\nexploratory analysis demo that\ndemonstrates what I consider to be some\nfairly straightforward yet powerful\ntechniques to get traction on what's\ngoing inside a model and use them to\nspeed run half red arriving the indirect\nobjected identification circuit\nand a concrete question that I think you\ncould get some good traditional\nanswering with a combination of my\nnotebook and the underlying Transformer\nlens library that I wrote that it uses\nto play around with models and the code\nbase Redwood put up with their paper is\nwhat does the indirect object\nidentification Circuit look like in\nother models\num\nthe library included lets you load in\nanother model by just changing some\nStacks at the top\num is it the case that the same circuit\narises is it the case that the same\ntechniques work is it the case that\nactually we need to basically solve from\nscratch and there's an incredible\nthere's like a new deep laborious effort\nevery time there's a new circuit who\nknows I think this is something that you\nthink attraction\nand is a question that I'm pretty\ninterested\num\nand I'll skip over this so so a new area\nof concrete open problems training\nattacks so\num\nit is just a fact about the universe\nthat\nfrequently when we take a neural network\nand we give it a bunch of data or some\ntask run and then we apply stochastic a\ngradient descent to it to a bunch it\nends up with a solution to the task that\nis reasonably good\num I'm extremely confused about why this\nhappens and I'm also very confused about\nhow this happens what are the underlying\nacts\num how does the model navigate through\nthe space of all possible solutions it\ncould fight\nis it the case that there's just some\nclear canonical solution but it's always\ngoing to end up in even if it's path\nthere are somewhat difference is there a\nlot of Randomness\num where it could just end up at\nSolution One or solution two kind of\narbitrarily is it the case that we can\ninfluence these training Dynamics or is\nit just really over determined and it\nwould break anything we try doing\nand I think that a particularly\npromising way to get traction on a lot\nof these questions is to\nuh train models that can demonstrate\nsome of these problems in a smaller more\ntrackable setting and then trying to\nreverse engineers\nand an example of some work that I did\nhere so there's this there was this wild\npaper from openai at the start of last\nyear on this phenomena of grocking so\ncrocking is this thing where if you\ntrain a small transformer on a some\nspecific algorithmic tasks in this case\nmodular division\nand you get it\nsay a half of the data as training data\nif you treat it on that task it will\ninitially just memorize the data it sees\nand do terribly in there it doesn't see\nthat if you keep training for a really\nreally long time in this case about a\nhundred times longer\nthan it took to memorize\nit will sometimes just abruptly\ngeneralize\nand to underline it suddenly generalizes\ndespite never seeing uh despite just\nlooking at the same data again and again\nand never actually having a chance to\nlook at the Unseen date\nand this is a kind of wild mystery that\na lot of people in the ml Community were\ninterested in\nand yeah this worked I didn't I trained\na even smaller model to do modular\ntissue and instead really hard at its\nweights and found that it was doing this\nwild treat based algorithm\nwhere it learns to think about\nuh modular Edition as rotation around\nthe unit circle it learned a lookup\ntable but converted the inputs to uh\ninputs A and B to rotations by angle a\ntimes constant and B times constant\naround the circle it learns to compose\nthose uh to get the rotation a plus b\nand then to get out the correct answer C\nit learns to\napply the reverse rotation C and then\nlook for which C got a background\nstopped\nand because that's around a circle it\nautomatically gets modded\nand uh you really don't need to\nunderstand the specific details of this\nalgorithm but it is just incredibly wild\nin my opinion this is actually a thing\nthat the model looked\nand if we look at the model during\ntraining we can see using or\nunderstanding of the circuit we can see\nthat\neven during this Plateau\num\nthe model uh even during the period\nbefore actually generalizes the model\nhas shown significant internal structure\nto its await some representations\nand can basically just use your\nunderstanding so look inside the model\nand see what's going on during training\nand a concrete problem here but I'd be\nexcited to see someone pick around that\nis in my crocking work we Dove really\nhard into reverse engineering this\nmodular Edition\nbut I also exhibited grocking in a few\nother contexts like five digit Edition\nor\nmodel trains to have induction heads on\na small algorithmic task and these also\nseem to grow and I would love for\nsomeone to take say the induction heads\ntask try to really dig into what's going\non there and see if you can use this to\nunderstand why that one Crooks and see\nhow much these insights compare\nall right\num\nnew area of urban prompts we didn't\nfollow the previous ones you can just\nstop paying attention to get hit uh\nstudying neurons\nso our best understanding of how models\nhow neural complicated neural networks\nwork\nis there a lot of what they're doing\nis they are learning how to compute\nfeatures\nwhereby feature what I mean is\nsome function or property of the inputs\nfor example this token is the final\ntoken in Eiffel Tower this bit of the\nimage contains a dog ear uh this image\nis about Christmas uh this number\ndescribes a group of people\nand\num\nthis is our best guess for how models\nwork and\nI would really like to get a much better\nunderstanding of what kinds of features\nactually get learned in languages and\nwhere do these get ones\nbecause what seems to happen in say\nimage models which I said which we've\ngot a much better handle is that those\nmodels seem to develop specific families\nof simple features that get combined to\nmore and more complex features until\neventually you get really sophisticated\nand high level once\num here uh some great neurons from this\npaper called multimodal neurons which\nstudied neurons in a model called clip\nwhich contains both image and text pots\nand they found all of these wild\nhigh-level neurons like a Donald Trump\nin Europe or a Catholicism Euro or an\nanime neural or a teenage neuron and\nit's like what\nand uh we are much less good at reliably\nfiguring out what neurons correspond to\nand languishables but the scope I'm\naiming for in this area is more just\ntry to look at what neurons represent\nand use this to get a better handle on\nwhat kind of things the model is\nactually learning even if this is\nneither complete nor\nyeah even if this is not complete snow\nfully rigorous I think we would just\nstill learn a lot\nand a pretty janky but effective\ntechnique here is this idea of looking\nat Max activating data examples that is\nyou pick a neuron you feed a bunch of\ndata through the model and you look at\nwhich text most activated that new\nand if the top 20 bits of text\num sing to a bunch of them seem to have\nsome consistent pattern that's\nreasonably good evidence that's the\nmodel has actually learned that pattern\nis a feature\num and there's this great paper called\nsop Max linear units from anthropic that\nI was mainly involved in where\num one uh this isn't actually focus of\nthe paper one of the things they do is\njust explore what kind of things are\nlearned by neurons and we have things\nlike this a64 neuron which observes that\noften you get strings that are written\nin base64 and that if this happens\nthere's a bunch of specific tokens that\nare more likely to occur than otherwise\nand a bunch of common words that are\nreally unlikely to occur\nthere's uh disambiguation neurons\num so this is a family of neurons which\nlook at the token D and observe that D\ncomes up in a bunch of different\nlanguages and contexts and it's the same\nkind of text in each of those but in\nAfrikaans or English or German or Dutch\nyou want to think of it as a completely\ndifferent thing and the model seems to\nlearn separate features for D in Dutch D\nin German\nEtc\nuh there's also more abstract and\nsophisticated\num here's a neuron which uh in the\nmiddle layers and much larger model\nwhich looks more numbers that implicitly\ndescribe groups of people\nand uh\none particularly one particular notable\nthing is that the way language models\nwork is they can only use text that came\nbefore rather than afterwards so it knew\nthat say 150 or 70 was going to describe\nguests despite this not occurring before\nand I made this somewhat janky and under\nconstruction tool called neuroscope\nwhich just collects and displays the max\nactivating data so examples for a bunch\nof neurons uh for well all neurons and a\nbunch of language models and I think an\nextremely low barrier of Entry project\nwould just be go poke around it there\nand see what you can find\nI'm particularly interested in seeing\nwhat kind of\ninteresting abstract Concepts do models\nseem to learn around the middle layers\nof larger models\nand I'd be particularly excited for you\nto then go and try to validate this\nunderstanding either by actually reverse\nengineering something in a smaller model\nor just by feeding in a bunch of texts\nthat you generated yourself to the model\nand seeing what hypotheses you can\nconfirm or falsify about what text\nshould activate that\nand just seeing how reliable pattern\nspotting in text that activates a model\nactually is\nuh again if you go to neilmat.io cop\nDash slides you can find these slides\nall of them any licks in there\nall right next category open Pro poly\nsemanticity and superposition\nso\none thing that I somewhat glossed over\nin the previous uh category is this core\nproblem in interpreting neons\nwhich is the we studied a policeman test\nso\nit seems to be the case that\nwhile in image models neurons often seem\nto represent single Concepts uh\nsometimes they didn't so here is one of\nthe neurons studied in the multi-metal\nneurons paper\nand this is a technique called feature\nvisualization that roughly gives you\nthis funky psychedelic image that\nroughly models\nwhat that neuron is mostly looking for\nand we see this kind of playing card\ndice style thing we might think it's a\nkind of game in here\nbut it turns out that if you look at the\ninputs that most activated uh half of\nthem are about games or cards or stuff\nlike that and then half of them are\nabout poetry for summaries and it's\npossible that there's just some Galaxy\nbrain thing here where poetry and dice\nand cards are all collectively some\nhaired feature that's useful for\nmodeling the task that I'm totally\nmissing uh but\nthis seems to happen all of the time in\nlanguage models and is incredibly\nannoying\nand it's generally seems it just seems\nlikely to be the case that models have\njust decided to represent multiple\nthings in the same year\nor at least kind of\nrepresenting things spread across\nminions\nand our best guess for what's going on\nhere is this phenomena called\nsuperposition\nuh theater of superposition is that a\nmodel tries to represent more features\nthan it has Dimensions by squashing\nthose features into a lower dimensional\nspace\nand this is kind of analogous to the\nmodel simulating A much larger model\nwith many more parameters and features\nuh which is obviously useful because\nbigger models can represent more things\nand that the model has decided that\nsimulating a larger model with a bunch\nof interference and noise is good and\nworthwhile and a sensible trade-off and\nif you have more features then you have\nneurons you obviously can't have a\nfeature per neuron and you're going to\nhave to be compressing and squashing\nthem in in some complicated way\nand anthropic had this awesome paper\ncalled toy models of superposition\nwhich looked at can we build even a toy\nmodel that just actually provably learns\nto use superposition and is it the case\nthat there is it is ever that is it ever\nthe case that superposition is\nabsolutely useful to which the answer is\nyes yes it is it's very annoying\nand uh so what they did is they just\nbuilt this uh really really simple setup\nwhere you have a bunch of features that\nare added to the model they need to be\nsquashed down into some low dimensional\nspace and here you have five input\nfeatures abaniti squashed into a\ntwo-dimensional space and then\nwe measure how well the model can\nrecover these\nand it turns out that if the features\nare there all the time such that they\ninterfere with each other a lot the\nmodel just learns a feature per\ndimension\nif they aren't there as often uh sparse\nt means\nagent at the time a feature is just set\nto zero the model decides to squash two\nper dimension\nand if they're even less frequent\nthe model decides to squash five into\nthe two dimensions in this pretty\nPentacle configuration\nand uh if you dig more you find that uh\nthis thing wasn't just a spontaneous uh\nthing that occurred because uh we had\njust two hidden Dimensions uh it turns\nout if you make the model squash things\ninto more hidden Dimensions they\nspontaneously self-organize into classes\nof features that are each in their own\northogonal Subspace and that if you put\nthings they can form energy levels\naccording to\num what configuration they had like some\nmodels have tetrahedra where four\nfeatures fit into three dimensions uh\nsome models have a mix of triangles with\nthree and two and a typical pairs with\ntwo and one\nand I'm\nnot really going to try to explain this\ndiagram properly you should totally go\ncheck out the paper if you're curious\nbut some open questions here but I'm\npretty excited about people exploring\nthe first is just this paper makes a\nbunch of predictions about what\nsuperposition might actually look like\nin models when it might occur what it\nmight look like uh can you just go and\npick her out of the model and get any\ntraction on figuring out whether any of\nthese are\num\nin particular if you start with one of\nthe circuits we've already got a lot of\ntraction on like induction heads or\nindirect object identification I think\nyou might actually be able to get some\ninteresting data\nand the second category of prop is I\nthink there's lots of things I'm\nconfused about to do with how models\ncould do computation in superposition\nand how models can kind of squash more\nfeatures that they have neurons and then\napply some non-linear function to\nprocess information and they get\nsomething useful at\nbut it seems like they can do this and\nthe paper very briefly explores but in\nmy in the relevant concrete open friends\npost I try to spell out a bunch of other\nangles I might take and things I'm still\nconfused about\nand yeah\num I generally think that dealing with\nsuperposition is probably the biggest\nopen problem in mechanistic\ninterpretability right now\nand I am very excited to see if we can\nget more progress in\nall right new category of problems if\nyou so doubt it's all pay attention\nagain techniques and automation\nso fundamentally the\nthing we are trying to do when reverse\nengineering models is to form true\nbeliefs about modeling tones\nforming true beliefs is hard\nand one of the main things that lets us\nget anywhere on this is having good\ntechniques having well understood\nprincipled approaches that we can apply\nthat will actually help us gain some\ntraction and gain Summit insight into\nwhat's going on inside\nand I think that there's lots of room\nfor Progress here building uh building\nnew tools building a better and more\nreliable toolkits building\na better understanding of our existing\ntoolkits\nuh one concrete example of this is so a\ntechnique that's pretty popular in some\nCircles of interpretability is ablations\nthat is you pick some bit of a model and\nyou set this bit of the model to zero\nand you check okay I've set this bit of\nthe model to zero\num How does its performance on a given\ntask change\nif you set a bit to zero and the\nperformance tanks then probably that bit\nmattered if you had a zero opponents\ndoes a tank then probably perform it\nthen probably that it didn't\nis a naive thing that it's very\nreasonable to think that's something\nthat I thought but one weird ass outcome\nover the interpretability of the wild\nwalk was they found these things called\nbackup pets\nwhere it turns out that when you delete\ncertain important heads other heads in\nthe next layer or two change their\nbehavior to near perfectly compensate\nand it's like what\nso what this graph shows is that if you\nlook at the direct launched attribution\nof a hat that is uh the effect of that\nhead on the output of the model\num\nWhat You observe\num is that when you delete this\nparticular important heads uh so the\noriginal effect is like three after you\ndelete it it's zero because it's deleted\num two other heads which are really\nimportant just kind of stop-map it to\nsignificantly change the behavior and go\nfrom really big effects to fairly small\nnegative effects and a fairly small\nvisual effect to pretty big next effect\nand it's like what\nwhy does this happen I would not have\nexpected this to happen I'm extremely\nconfused I would really like to get a\nmuch better understanding of what is\ngoing on here and what kind of\npathological cases might exist for other\ntechniques I think are cool\nand one concrete Urban problem here is\njust how General is this backup\nheadphone\ncan you find backup previous token heads\nwhich are 10 to the previous token uh\nonly if an earlier previous token has\nits deleted can you find backup\ninduction heads or backup duplicate\ntoken heads uh I have no idea I would\nlove someone to just go and look\num or are backup heads purely results of\ntraining models with dropouts uh\ntechnique where you just randomly set\nsome bits to zero during training try to\nmake it robust to that\nI have no idea\num a yeah and the exploratory analysis\ndemo that I made includes a bit at the\nend where I replicate the backup name of\nour head results to hopefully save you\nsome efforts\nuh I'm generally trying to emphasize\ndemos and tooling because if you're just\ngetting started in a field and you want\nto try making traction it's just really\nhelpful to have something you can crib\noff of and copy from profaneed and saw\nfrom scratch\num also if the thing does not succeed at\nbeing useful and you still need to write\na little things from scratch please let\nme know I will attempt to fix it\num a different kind of thing that I fit\ninto this broad category of techniques\nand automation is\nautomation\num one of the most common and in my\nopinion fairly legitimate critiques of\nmechanistic interpretability is that a\nlot of these successes people have had\nare\nvery labor attentive and maybe they've\nworked on small models and maybe they'd\nwork in printable and large models but\nwhere it would just take like years to\nreally get traction on actually fully\nunderstanding a incredibly large and\ncomplicated model and where the bigger\nmodels get the mobile height the more\nand more intractable it will see\nand\nI'm pretty excited to see effort trying\nto take\num the techniques and know-how insights\nthat we have and trying to think through\nhow you could actually scale these and\nautomate these\num one very simple dumb example I made\nis this thing that I call an induction\nMosaic where so a nice property of\ninduction heads is that they work even\non sequences of purely random tokens if\nthe sequence says repeated\nand\nif yeah it works on the sequence\num even if\nit works on a sequence of random tokens\nyou got the random tokens that they use\nrepeater and it always attends to the\nToken that immediately after the first\ncopy of the current\nand this is an incredibly easy thing to\ncode and then just check for\nand this is a heat map where the y-axis\nis which heads the x-axis is which layer\nand the color is how induction is that\nhits and we can see that across 41\nmodels there are induction heads at all\nof them and we can even observe some\nhigh level patterns like most models\nonly have induction heads in the second\nhalf\napart from gptj for some reason why does\nthis happen I have no idea\num and I made this mistake using my\nTransformer lens Library which has the\nfun feature that you can change which\nmodel you have loaded by just changing\nthe name of the model in the input\nand I would really love for someone to\ngo and make go and just do this for all\nof the kinds of heads we understand\num EG all the heads in the indirect\nobject identification circuit are the\nsimple times like duplicate token and\nprevious token heads and then just make\na Wiki where for every head in\nall open source language models we just\nhave a brief summary of what we can what\nwe know about it I think this would be a\nreally cool thing to exist that just\nwouldn't be that hard to make\nand\nmore generally I think there's a large\namount of scope to take the stuff that\nwe already know and already got and to\nthink about how you can distill it to\ntechniques or how you can to sell it to\nthings that are more scalable and\nautomatable\nthough I do personally yeah\nsorry uh sorry to interrupt there's a\nquestion that was asked a little bit ago\nuh we're a few steps removed here\num Lisa is wondering\num a paper you mentioned a little bit\nago if it could be interpreted as model\nredundancy she says did this paper also\nlook into what makes a head count as an\nimportant Head worth creating a backup\nfor I don't know if that's still\napplicable or not but uh\nthe answer is no they had not looked\ninto this but you could look into this\nand yep\num who knows I'd love to see the answer\nto that\num\nthis is just another data point in the\ngeneral Vibe I want to convey if there's\njust so many concrete open problems here\nbut I don't know the answer to I'd love\nto know the answer to\num all right uh final area of concrete\nUrban problems\num that's supposed to end up half past\nor the hour\nand only assume talks are an hour long\nby the calendar event till the hour\nwell no one else is me I'll just assume\nthat it's till the hour\num all right so\nbut I made a wrap-up scene anyway\num all right so you're welcome to go\nlonger we have like a slido uh for Q a\nquestions as well I'll post to the slide\nafter you're done yeah the timing is up\nto you\nyeah if you could post a slider in the\nslack and zoom chat now so people can\nlike stop putting in questions that'd be\ngreat\nunless you've already done that all\nright so final area of concrete Urban\nproblems I wanna okay is interpreting\nalgorithmic models that is train a model\non some synthetic algorithmic task and\nthen go and try to reverse engineer what\nhappened what algorithm did the model\nlearn\nand\num\nI\nthink obviously this is even more\nremoved\nfrom actually being able to reverse\nengineer Cutting Edge models than the\ntoy language models that I was talking\nabout earlier so you kind of want to\ncheck that we're doing is actually\nremotely useful\num one angle is that if we can it's a\nlot easier to really understand what's\ngoing on in an algorithmic model where\nthere's a genuine ground truth\nand this can Subs a testbed for us to\nexplore and understand our\ninterpretability techniques\nit can also be useful to build toy\nmodels to simulate things that we\nbelieve a lot of models to be doing so\nwe can try to understand them\nfor example\num it's plausible to me that a useful\nway to upon traction on the\ninterpretability of the wild work might\nhave been to train a toy model to do a\nsynthetic version of that task seeing\nhow small you can get it and see what\nthat model did\num and I think things like my groffing\nwork and the toy models are\nsuperposition are for the things in the\nspeed\num but all that is kind of\nrationalization the actual reason that\nI'm recommending this is I think it's\njust a really easy place to get started\nthat will help you build real intuitions\nand skills\nand that just pick a model on any\nalgorithmic task and then try to get\ntraction on it is just a pretty great\nfirst project even if it isn't actually\never really useful\num and a demo tutorial that I made is I\njust um so I was curious if you don't\nexplicitly tell and model anything about\nwhich position things in the sequence\nare uh can it red arrive these\nfor sophisticated reasons\nthe way Transformers work is that they\nkind of treat each pair of positions\nequivalently unless you explicitly hack\nthem to care about social information\nand it turns out that if you don't tell\nanything about positioning but you only\nlet It Look Backwards a two layer model\ncould just learn to read or write its\nown positional embeddings if trained on\na really simple dumb task and I released\na curl up notebook along with this\num with the code relevant codes but I\nalso decided to record myself just\ntrying to do research on this for like\ntwo hours\nand hopefully this is a good model to\ntry to build off of if you want to go\nand try to interpret algorithmic toss\nall right so those ends the Whirlwind\ntour are the areas of open problems\num some concrete resources to learn more\nmy 200 concrete open problems and\nmechanistic interpretability sequence\nuh though I really have not been\ncounting and I tend to add more lines on\nmy edit so it could be like 300 at this\npoint if anyone's account let me know\nget my guest\num\na guide on how to get started in\nmechanistic interpretability\num I generally recommend starting with\nthis one unless you just feel really\npsyched about jumping straight into a\nnatural open problem I try to give comfy\nsteps with success criteria and goals\nand to generally guide you through the\ncommon mistakes I see people make like\nread for weeks to months before they\nwrite any codes or uh be really\nintimidated by specific things that\naren't actually that hard\ndiverts a comprehensive mechanistic\ninterpretability explainer\num which is basically an enormous brain\ndump of ideas that I and Concepts and\nterms the jargon that I think you should\nknow if you're trying to learn about\nmechanistic interpretability with\nextremely long tangents examples\nmotivations and intuitions uh I think\nit's both a pretty productive thing to\njust read through to get a sense of the\nfields if you're in the mood for a long\nhaul or as a thing to just search\nwhenever you come across an unfamiliar\nterm\nand finally my Transformer lens Library\nwhich\nattempts to make it so that if you are\ntrying to do some research on a question\nthat involves reverse engineering a\nmodel like gpd2 I try to make it so that\nthe gap between have an experiment idea\nand get the results is as low as\npossible by trying to design an\ninterface that does all of the things I\ncare about doing as a researcher as fast\nas possible\nand anecdotally other people seem tools\nthey find this useful though who knows\nbut uh doing mechanism mechanistic\ninterpretability with their\ninfrastructure is pay so hopefully this\nhas helped\nand yeah zooming out a bit I kind of\njust want to give the high level framing\nthat\nthe spirit I want to convey in this talk\nis mechanism interpretability is a\ninteresting a live field that just has\nlots of areas where people could add\nvalue add lots of things we don't know\nbut I would really like to understand\nbetter and that the bar to entry for\nengaging with these and making progress\non them is not that high\num I also want to make sure I also\nconvey that through research is just\nactually fairly hard and you should go\ninto your first research project\nexpecting that you're going to get a lot\nof things wrong you're going to make a\nlot of mistakes\nultimately the thing you produce\nprobably won't be like\ngroundbreaking research and it might\nhave some significant flaws especially\nyou know how mentorship\nbut also but the best way to learn to\nget to a point where you can do useful\nwork is just by trying and by getting\nyour hands dirty getting contact with\nreality and actually playing around with\nreal systems and having some clear focus\nand prioritization of this is an open\nproblem that I want to understand let me\ndirect my learning and coding and\nexploration so that I can try to get\ntraction on this is in my opinion just\none of the best ways you can learn\nhow to get better at doing real\nmechanistic interactability research and\nI think we'll have at least some chance\nto just doing real research in German\nand\nyeah\num\nI try to look to a bunch of resources\num I will also just emphasize again that\nit is plausible to me that too many\npeople are trying to do mechanistic\ninterpretability and if you feel\nequivalently drawn to other things you\nshould totally go explore those but if\nyou're going to try to explore\nmechanistic adaptability please try to\ndo it right and I hope these various\nresources and framings are helpful to\nthat\nall right I am going to end the main\npresentation here but I'm very happy to\ntake a bunch of questions for a while\nuh question one do I think\ninterruptibles your research into model\nbased RL be useful any cool previous\npapers in this Eric\num\nSo my answer to most questions about do\nI think interpretability Research into\narea X would be useful is yes because I\nthink that the amount we understand\nabout that works is just pathetically\nsmall and that if we can understand more\nthis would be great\nand yeah\nuh this seems like a thing that I would\nreally love to have more inside it too\nuh if I'm engaging with the question\nmore from the spirit of prioritization\nwould I rather have interpretability\nResearch into model-based RL rather than\nother areas uh sorry little based around\nthis jargon the people who are familiar\nuh RL is reinforcement learning which is\nthe study of if you train a model to be\nan agent that is taking an action on\nsome environments and you give it some\nrewards saying you have done good things\nor you have not done good things\num how could it learn strategies to get\ngood rewards and I believe that model\nbased specifically though I'm not an RL\npersons this could be wrong is when the\nmodel uh when part of the model is\nexplicitly designed to simulate the\nenvironment it's in\nand using that in order to form better\nplans and strategies to get more reward\nand yeah so\nfundamentally I think that the\nthere are two ways I would think about\nreinforcement learning interpretability\nbeing useful\nthe first would just be\nI do not think we understand\nreinforcement learning at all I would\nreally like to be way less confused\nabout this and I think that just\naggressively optimizing for whatever\nseems most attractable\num just to learn anything at all seems\ngreat and this can look like simple\ntasks this can look like\num\nsmaller models this could possibly\nhaving an explicit model-based thing\nwill be even easier\nand then the second angle of why I care\nabout interpreting RL is because lots of\nquestions and consensual alignment\nfundamentally revolve around things do\nwith how models\ndo things\nand yeah fundamentally around how the\nmodels do things\nand what does it even mean for a model\nto be an agent or pursue girls\ndo models have internal representations\nof goals at all are models capable of\nplanning who knows and\nI\ndon't have strong takes with a model\nbased or model free RL is the best way\nto approach that\nand finally\num the actual thing that I care about\nwith all the interpretability research\nis will this actually be useful for\nwhatever AGI looks like and\nthe popular way of using RL on Cutting\nEdge models today is reinforcement\nlearning from Human feedback which uses\nan algorithm called proximal policy\noptimization or PPO and to my knowledge\nPPO is not always\nwhich makes me less excited about model\nbased Focus\nand I'm not aware of any cool previous\npapers\num I do actually two days ago I made\nlook yesterday I made a post on the\nsequence on reinforcement learning that\ntried to give some takes on how I think\nabout reinforcement learning in general\nhow I think about interpreting real\nsource\nreinforcement learning in general which\nis hopefully useful to Checker\num is the next question\nis there any research or recommendations\nyou have to look into automating circuit\nDiscovery in Los Angeles\nso\nlet's see\num there's two things that I think are\nparticular so one thing I'm pretty\nexcited about here is Redwood research\nhave this sequence on this algorithm\ncalled causal scrubbing which is their\nattempt to make something that is\nactually automatable uh sorry that is\ntheir their attempt to make an approach\nto\nverifying that something is actually a\ncircuit that can just be automated and\nis also highly progress I'm very excited\nabout color scrubbing building\ninfrastructible scrubbing scaling it up\nchecking how one-up works that they're\ncurrently running their large scale\nremix research Sprint style program\nwhich is probably going to get a bunch\nof data on that but I definitely check\nout causal scrubbing\nand I would also\num Arthur Conley who is a researcher at\nredwoods uh recently had this post on\num\nautomatic circuit Discovery where he was\ntrying to write some codes he was author\non the interpretability of the wild\npaper and was writing some codes to try\nto automate uh How does it go with that\ncircuit having to go to this that\nclosely but I think it's pretty cool\num more generally\nso my high level philosophy of\num automating interpretability is\nI think the common uh thing that I\nfrequently see in people getting into\nthe field which in my opinion is\nsomething of a mistake is that people\nseen mechanistic interpretability and\nthey're like this is cool but it's so\nlabor intensive I'm not interested in\ndoing a project that involves labor\nintensive approaches to discover\ncircuits and I want to fully jump to\nautomation or incredibly scalable things\nand I think there is a valid spirit I do\nthink that having things that are\nautomated or scalable is extremely\nimportant\nbut I also think that\nwe are significantly more constrained by\nactual true knowledge about networks and\nnetwork internals then we are\nconstrained by ideas for techniques that\nmight scale and might automatically find\nthings\nand\nin my opinion the right way to do work\non discovering automated ways to do Mac\nand tup is by starting with examples we\nunderstand like pretty well\num trying to distill this and find\ntechniques that could uncover this\nfaster\nAEG are there ways you could\nautomatically discover the\ninterpretability in the wild circuits\nand then it's trying to scale up these\ntechniques by finding novel circuits\nand in the process of doing so use this\nto validate how well your Technique\nactually works with a bunch of more ad\nhoc and labor-intensive approaches on\nthe side\num\nhis non-mechanistic interpretability a\nthing uh yes I think it would be fairly\nrude of me if someone left this talk\nthinking that the early back the early\nadaptability that existed is mechanistic\ninterpretability uh interpretability is\na large and thriving field of Academia\nthat I\ndo you not feel qualified to\ngive clear summaries of\num generally there are just lots and\nlots of things that people do some high\nlevel categorizations would be things\nlike uh there's Black Books\ninterpretability techniques which just\nlook at the model's input and output\nand uh maybe differentiate through the\nmodel and try to generate explanations\nfor why it produces this output\num\nthings like CLT maps are things that try\nto figure out parts of the inputs\neffects and outputs uh there's the\nentire field of explainable AI which\noften judges their metrics their metric\nof success as how we produce an\nexplanation that is useful to users I\ndon't feel that excited about these\nareas especially from a limit\nperspective\nthere's also a lot of areas of academic\ninterpretability that is trying to\nreally engage with models and Top Model\nlatels\num\nthere's this good survey paper called\ntowards transparent AI that I just\ncopied into the zoom chat\num if someone could collect these links\nso like put them in slack or something\nafterwards that would be useful\num\nyeah\num do I have nothing to say on this\nyeah I mean mechanistic is kind of a\nfuzzy word\num there's mechanistic as a de facto\ndescription of the community of people\nwho work on what I call mechanistic\ncontraptability which tends to be people\nin the Olympic Community it tends to be\npeople at industry Labs or non-profits\nlike redwoods and there's in contrast to\nthe things that actual academics and\nacademic Labs or people in more\nmainstream bits of animal research do\nand then there's the actual research\nphilosophy of what it even means for\nthem to be mechanistic or not and it's\nvery easy to get into stuff that's what\nhas\num I also recently got into a bunch of\narguments on Twitter with uh people in\nmore conventional academic\ninterpretability about what even the\ndifferences of mechanistic stuff were\nand you should go check out my Twitter\nreplies if you want a lot of in the\nweeds discussion\nall right uh what do I think about\ntrying to interpret well reward models\num that seems great I really want to see\ncompelling results trying to interpret\nreward models uh I was not actually\naware there were papers doing that which\neither suggests those papers operate\nGoods or just that my ability to hear\nabout papers is kind of bad so\neveryone's interested in if I've ever\nread that's interested in emailing me\nthose papers I'd appreciate that\num generally I'm just very excited about\ntrying to interpret rule models\num for context the\nuh prevailing way that language models\nare trained with grain force of learning\nis this technique called RL from Human\nfeedback where the human operator gives\nthem some feedback and they're trained\nto optimize it and reward models are a\nsub part of that approach where\nthe sorry they're a subpart of that\napproach where what happens is that the\num\nlanguage model has a separate thing\ncalled a reward model I say separate is\noften just an extra heads on the model\num on the model outputs\nand this predicts what a what feedback\nof human would give and that the human\nfeedback is used to make sure the reward\nmodel is in sync\nand the model actually learns from the\nreward model\nwhich is very janky\nthat is the main way that using human\nfeedback is remotely economical because\nhumans are expensive in a way that gpus\nare not\num also humans are slow in a way that\nGPU symbols\num yeah\ninterpreting Role Models great probably\npretty hard\nI would definitely start with some kind\nof toy model in a simpler setting\num so a nice thing about the reward\nmodels of language models is they're\nbasically exactly the same network but\nwith a different like unembetting Matrix\nat the end and so hopefully a lot of the\ninterpretability work will transfer\nuh one fairly ambitious project that I'm\nKeen to see someone do is to take a tiny\nmodel like a one layer language model or\na four layer language model and try to\nsimulate RL from Human feedback using I\ndon't know a larger model to just\nautomate what that feedback should be\nyou probably don't even need g53 I'm\nsure gpd2 could do that\nand I totally not booked out of my head\nbut like the right task here is ETC but\nI expect I'd be excited to see what\ncomes out of this\num\ncool next question hey do you expect\nlarger models to outgrow the need to\ncreate polysomantic neurons\num my yes is that\num also I observe the question below\nthat says what makes interpretability\nresearch less promising for alignments\nuh I don't really understand what this\nmeans and it'll be useful if you could\nput a clarification on the zoom shots uh\nin particular what I said previously was\nthat I think uh Black Box explanation\nbased interpretability research is less\npromising for alignments rather than\nacceptability research in general\nbut you might want to put a\nclarification before I get to it\num all right case question do you expect\nlarger models to outgrew the needs to\ncreate poly semantic neurons\num\nso\nI think there's two questions here that\nI want to disintegrate\nthe first question is\nshould a more capable model that also\nhas a bunch more neurons\nno longer need polysomatic neurons and\nI'm like no uh\nseems extremely unlikely\num\nthe the\nthere's this useful intuition they're\nexploring the toy models paper called\nthe feature importance curve where you\ncan imagine just having a graph where\nthe x-axis is just the set of all\npossible features that it can ever be\nuseful to model about inputs from things\nlike this text as an English versus\nFrench to I don't know\num the content of Neil Landers Serene\nMatt's talk\nand\num\nyeah so\na thing\nthis basically seems like it should be\nnot quite infinite but like\nincredibly long tailed\nand the features that are further and\nfurther outs are less and less useful\nand less and less frequent but it's all\nbetter than nothing\nand so we should expect that if models\ncan they're going to want to represent\nas many of these as they can\nand this seems like a thing that's still\ngoing to result in placement just\nif you mean holding capabilities fixed\nbut giving them way more parameters can\nthey outgrow the need for polysomatic\neuropes I don't really know\num\nthis is also just pretty hot too because\nif you give something more neurons\nyou're inherently giving it more\npremises uh but a thing that I'm kind of\ncurious about is what happens if rather\nthan having\nfour neurons per residual stream\nDimension the model has like a hundreds\nuh entertainingly if you look at the 11\nbillion premise and model T5 for some\nunknown reason they trained it with uh\n65 neurons per residual string dimension\nI have no idea why actually 64. I have\nno idea why but it might be an\ninteresting thing to study to see if\nthings are less policematic then though\nT5 is an encoder decodable which are a\nmassive pain for other reasons\nall right next question can we rule out\nthat the neurons that seem weird EG\npoetry games are just being looked at in\nthe wrong basis of the layer activation\nspace\nso\nI do not think that we currently have\nenough evidence to confidently roll that\nout\nthough I don't think it would be\nincredibly hard to at least collect some\nevidence for that in a one layer\nlanguage model\num a conquer different problem so I'm\nwatching this might be a contract\num so\nyeah the so the two things you want to\ndisentangle here are is it the case that\nthe model has uh as many features as\nDimensions but rather than having a\nfeature Direction correspond to a neuron\nit corresponds to some weird things\nspread across a bunch of neurons\nand\num I don't know\nand then on the flip side there's the\nquestion of of them all features than\nDimensions such that even though I've\nalready wants to align them with neurons\nthey just cut because there's more than\nmore features than neurons and my guess\nis that if there isn't superposition the\nmodel is just gonna really want to\nyeah it isn't super possession I guess\nas the model just wants to align\nfeatures with neurons because in the\nmodel has features framework the thing\nthat we're expected to happen is that\nthe model\nthe thing that we're expecting to happen\nis that the model wants to be able to\nreason about each feature independently\nwithout interference between the two\nand uh the way that non-linearity is\nlike a brother and jelly worked is that\neach neuron is affected purely\ndependently from the others but that as\nyou combine them but that if a features\nif there are multiple features in a\nsingle neuron then those features will\ndramatically interfere with each other\nand this is a pain\num\nbut yeah this is all kind of just\nconceptual reasoning I think that the\ntoy model is a superstitioned paper is\nlike pretty good evidence that\nsuperposition might happen but I want\nsomeone to go and check\nall right there's a sudden extremely\npopular question on can mechanistic\ninterpretability be used for detecting\ndeceptive alignment in models\nso two possible ways to interpret this\none way to interpret this is can we use\nit right now to detect a steps of\nalignment and I'm like uh\nMaybe\nprobably not we're not very good at it\nuh we might be able to like get some\ninsight into what's going on by just\nlooking at the model on a bunch of\nexample prompts and poking around to see\nif we can get any traction but like I\ndon't know it seems hard man\num if I interpret this as do I think\nthat we could get to a point where we\ncan use it to detect a substable items\nuh yes I feel pretty excited and\noptimistic about this\nand I think that the worlds where\num mechanistic interpretability is\nextremely useful is Worlds where it can\nhelp us get traction on detecting things\nlike this the\nkey path would just be\ngetting Goods at uh truth tracking and\nscalable ways to\nsorry getting good at truth tracking and\nscalable ways to detect which bits of a\nmodel are most relevant to producing\nspecific outputs and finding specific\ncircuits Within These putting the model\ninto a context where it could be being\ndeceptive and using these to try to get\ntraction on what's going on and why and\nhopefully within a few years the field\nwill be in a much more sophisticated\npoint where that story seems kind of\nboring and nice\nand\nI think that there's a bunch of ways\nthat you could raise the stakes and push\nback on this and be like but if the\nmodel was truly deceptively aligned then\ncouldn't it just think it's thoughts in\na way that couldn't be mindfulness and\nI'm kind of skeptical of this just\nbecause I think the\nmodels get good at being deceptive\nbecause they have a bunch of feedback on\nwhat they output which outputs are good\nversus bad but they have basically no\nfeedback on their internal\nrepresentations\nhow legible they are and how much they\ncan be interrupted\num it's possible there are some high\nlevel strategies and\nI do expect that say gbt4 will have red\nso the circuits papers and known\nsomething about interpretability\nand maybe there are strategies that a\nbottle could use to intentionally shape\nits own thoughts to be harder to\ninterpret in a way that breaks these\ntechniques\nI think this seems significantly harder\nthan any other form of\nbraking or ability to detect deception\nbut hard to rule out\nanyway I think a lot of this depends on\nlike how good we get which in my opinion\nboils down to a bunch of pretty hard\nunknown scientific questions\num\nthere is a fun question from Jesse\nHoogland on William moose contrarian\nideas or disagreements with other\ninterpretability researchers\nyeah\nso\nlet's see\nspecific\nthings come into mind\num\nlet's see\nI'm\nif I'm focusing on Within mechanistic\ninterpretability\nI'm pretty bullish on\num I'm pretty bullish on just looking\nreally hard at toy language models and\njust I think that we're just pretty\nconfused about how to interpret even a\none layer model\nthis seems bad and I think that solving\nthis will teach us a bunch of things and\na lot of other people in the field seem\nexcited about logicals and uh\nuh on the flip side I am inherently very\nskeptical of any toy model work that's\ntrying to explicitly construct something\nthat is trying to model something useful\nabout a language model\num analogous to anthropics playable\nand I'm broadly convinced that their toy\nmodels or superposition paper was good\nand actually tracked a thing about Real\nModels they have a later one on toy\nmodels of like excellent memorization\nand I'm not actually very convinced that\nthat one track something useful about\nReal Models\num\nanother disagreement I have is I'm\npretty bullish on the idea that models\nactually learn\nclean legible Eternal structures that\nform circuits and that we can actually\nfind these and understand these\nand Adler this is definitely a thing\nthat's strongly distinguished it as\nmacinturp from the rest of\ninterpretability but even within\nmacintop I never say some researchers at\nRedwood who I think are pretty skeptical\nof that perspective\num\nI also\nlet's see I'm generally pretty skeptical\nof anything that boils down to uh\nexplicitly put interpretability tools\ninto the loss function\nbecause I just think that having uh that\nis actually do grading descent on\nsomething that involves an\ninterpretability tool because I think\nthat having a tool that is good enough\nthat we can robustly optimize against it\njust seems wildly unrealistic\ngradient descent is small to the view\nall\num\nand a lot of the things I'm excited\nabout look more like enabling better\nlimit work and auditing models and\nnoticing when they go wrong rather than\nbeing just a fully fledged alignment\nsolution where you can just train on\nthat don't be unlike our evilness metric\nand just win\num\nuh do I have other disagreements\nI think it's\nI think\none idea I've heard badied around is\nthis idea of enumerative safety this\nidea that\num a way we can make models safe is by\nsolving something like superposition a\nnew reading every feature represented in\nthe model and using this to understand\nto just check like are the features\nwe're concerned about like deception or\ngoals or situational awareness in them\nand I think this would be pretty cool if\nit works but it doesn't seem at all\nnecessary to me to get this to work to\nget useful things out of\ninterpretability and I think some people\ndescribe me all that\num\nentertainingly I don't think my work on\ngrocking was like that useful for a\nlimit and I know some detachability\nresearch is just agreement on that which\nis hilarious\num I really wish I agreed with them\nbecause I feel way better about all that\nwork\num\nI basically just don't think crocking is\nthat great a model of things that happen\nin real netbooks though I think there\nare some cool and useful lessons\nand hopefully pretty good field building\num cool I will end my list of contrarian\nideas and discriments there\num all right I will probably wrap up but\nYep this was fun I will reiterate that\nif you found this talk interesting or\ninspiring or just want to prove me wrong\nwhen I say that people can do my content\nresearch you should go check out my\nConcord Urban problem sequence and that\nyou should also go check out by getting\nstarted guides and I'll put a link to\nthe slides again in the zoom chat\nbut yeah\nthen several for coming and for all the\ngood questions\nand\nthanks to Walter for the paper healings\non the chat\ndid I hear a final question\noh yeah I'm also giving this talk in\nsome other places so feedback on it\nextremely well\nthank you", "date_published": "2023-05-05T15:41:40Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "ad8d2cf1366f75fb79405371af695459", "title": "3:How Likely is Deceptive Alignment?: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=wm16iNht7PA", "source": "ai_safety_talks", "source_type": "youtube", "text": "okay\nso we are picking up where we uh left\noff uh last time talking about deceptive\nalignment today uh if we recall last\ntime we sort of started establishing\nthis case for you know why deceptive\nalignment might be a problem why it\nmight happen and today we're going to\ntry to uh pick up with that and really\ngo into a lot more detail on what the\ncase for deceptive alignment might look\nlike\nokay so uh let's just recap so so what\nis deceptive alignment so\nuh one thing I really want to get out at\nthe beginning is just some things that\nit's not so one thing that deceptive\nalignment is not\num is dishonesty so we're not talking\nabout a situation any situation where\nyou have some model some AI system and\nthat system you know lies to you it\ntells you some false thing there are a\nlot of situations where that might\nhappen\num but the situation that we're\nconcerned about is deceptive alignment\nwhich is specifically the situation we\ntalked about last time where the reason\nthat the model is doing a good job in\ntraining the reason that it looks\naligned the reason that it's\npseudo-aligned that it sort of seems\nlike it's doing the right thing is\nbecause it is actively trying to\naccomplish some other goal uh other than\nthe one that we want and uh you know\nbecause it eventually wants to do some\nother thing it's pretending to look\naligned in training uh for that purpose\nokay\num\nand so in in this sense that's sort of\nwhy we're talking about it last time as\nthis sort of form of pseudo alignment\nyou know that only arises potentially in\nor that they can't arise even in the\ncase where you've sort of you know\ntrained away everything that you can\nwith adversarial training\num that we're not necessarily going to\nbe talking about just in that context\nhere we're just sort of talking about\nany situation where\num the reason that the model looks\naligned in training uh is because it's\ntrying to accomplish some other role\nokay\nso one analogy that I really like for\nthis is from Ajay kocha so the idea is\nlet's say that\num\nI'm trying to run a business\nand you know I'm a child I just\ninherited this business and I need to\nselect somebody to run the business for\nme but I'm a child and so I have a very\ndifficult time actually being able to do\nthat selection\num\nhere are three different types of people\nthat you might find in your selection\nprocess we have saints these are people\nwho really genuinely want to help you\nrun the business\nwe have sycophants\npeople who you know\nwant to do whatever it takes to make it\nlook like they're doing a good job\nrunning the business but don't actually\ncare about making the business good and\nyou have schemers people who want to use\nthe business for their own purposes and\nso I want to fool you into thinking\nthey're doing a good job\nand so in the deceptive alignment case\nwe're specifically talking about\nschemers right both sycophants and\nSaints might lie to you but schemers are\nspecifically this deceptively aligned\ncase they're the case where they have\nsome objective that is different than\nwhat you want and the reason that they\nlook like they're doing the right thing\nis because uh you know they eventually\nwant to you know do some other thing\nthat is not what you what you want\nokay this all seemed clear\ngreat\nokay so we want to understand How likely\nis deceptive alignment in practice\nso uh the problem is you know as we sort\nof we're establishing last time in terms\nof you know why deceptive alignment\nmight be such a problem even the limit\nof adversarial training is that\ndeceptive alignment and you know robust\nalignment in a situation where the model\nis actually doing the right thing are\nbehaviorally indistinguishable uh at\nleast during training right no sort of\ninput that I could possibly generate\nwould necessarily be able to always\ndistinguish between them right we saw\nthings like RSA 2048 last time where\nthere are examples of cases of a thing\nthat a sort of deceptively aligned model\ncould look for it to tell whether it was\nin you know training or deployment that\nwe could never possibly actually produce\nan example of during training\nand um so because they sort of both look\nthe same because they're sort of both\nyou know trying to you know do the right\nthing you know one just for nefarious\npurposes\num the question of which of these two\nmodels we get is going to be a question\nof inductive biases it's going to be the\nsame sort of question we started this\nwhole thing off with of trying to\nunderstand you know what is the\nstructurally simplest model what is the\nsort of model that is the one that would\nbe selected for by by grading descent by\nour neural network you know uh you know\nthat would be sort of simple to\nimplement uh that would be most likely\nto find Yeah question\nso it seems pretty clear that the\noutputs of a deceptively aligned and\nrobustly aligned model would be the same\nin training so there's no way to tell\nthe difference but how do you feel about\ncracking open the model and actually\nseeing the cognition inside the\ninterpretability field that's building\nhow do you think that's likely to work\ndo you think that's just due to failure\nwhat is your opinion on that\nyeah that was a really great question I\nthink my answer right now is going to be\nwe're going to imagine we have no\ninterpretability we're going to be like\nimagine we're in the situation where we\ncan't look inside the model the only\nthing that we know is that it is some\nyou know algorithm that we found that it\nactually does a good job on training uh\nand that algorithm is going to be\nwhatever algorithm is most favored by\nthe inductive biases uh of our training\nprocess you know whatever is you know\nthe structurally simplest thing that is\nyou know the largest Basin whatever\num theoretically there might be things\nthat we could do to try to change that\ndefault picture right there is some\ndefault picture that is if you know we\njust plow ahead with machine learning\nwith the you know exact same sort of\ninductive biases that we you know\ngenerally use by default and we don't\nmake any changes to try to adjust for\navoiding deceptive alignment what\nhappens right now\nthat's not entirely realistic and since\nwe probably will do some things to try\nto adjust for it and we're sort of talk\nat the end uh you know about what that\nmight look like\num and you know in later sort of\nlectures as well but right now I just\nwant to establish what is the default\ncase right and so transparency\ninterpretability you know trying to look\ninside do something to actually try to\ndetect whether it's deceptive or trying\nto find some way to avoid deceptive\nalign cases is absolutely something we\nmay want to do but it's not something\nI'm going to be talking about right now\nwe're just going to be sort of trying to\nunderstand what this sort of default\ncase is\nokay\ngreat\nokay so you know again just reminding\nyou know the analogy from you uh from\nthe first time right is we're sort of\nthinking about this case where there's\nsort of two possibilities you know we\ncould learn the color classifier or we\ncould learn the shape classifier and you\nknow they both look the same in training\nthey both sort of you know do a good job\non these particular training examples\nbut you know in in some particular\ndeployment examples in some other cases\nthey do different things and you know\nthe deceptively aligned model and the\nrobustly line model are very similar\nright they sort of always do the exact\nsame things in training you know every\nexample that they that we can check you\nknow they both look like they're trying\nto do the right thing but if the\ndeceptively aligned model sometimes you\nknow finds itself in a situation where\nit has the opportunity to you know break\naway from whatever processes we have to\nyou know prevent it from doing something\nbad then it's going to do something bad\nright once it's a deployment once it's\nout there you know it might not do\nsomething very different and so\num\nagain if we want to try to answer which\none of these models are going to get we\nhave to answer the same sort of question\nuh that we sort of started the whole you\nknow series with of how do we\ndistinguish between which of these\nmodels we're going to get how do we\ndistinguish between getting the\nalgorithm which is doing shape\nclassification and the algorithm which\nis doing color classification\nokay\nso uh this is a really difficult\nquestion as we sort of you know have\nalready established it's really you know\nit's not trivial to understand the\ninductive biases of these machine\nLearning Systems\num and so in the sort of face of\nuncertainty I think one thing that we\ncan do that is really helpful is try to\nsort of do case analysis right we can\nsay okay we don't know exactly what the\ninductive biases uh look like we don't\nknow what is sort of going to happen but\nwe can suppose that while here you know\ntwo possible scenarios for how to think\nabout you know which sorts of algorithms\nare favored by Machine Learning Systems\nyou know what does it mean to be\nstructurally simple what does it mean\nyou know how do you know uh training\nprocesses select for which algorithms\nthey choose and which ones they don't\num and sort of see what consequences we\nget in each scenario and if we sort of\nhave you know robust conclusions uh\nacross various different possible ways\nof thinking about the inductive biases\nof our machine Learning Systems then we\ncan be more confident that even though\nwe're very uncertain about you know what\nit actually is that is causing you know\nwhich algorithm we get over the other\num\nthat you know we still at least have\nsome conclusion that that is Meaningful\nso that's the idea we're going to sort\nof break things up into two cases so\nwe're going to call these the high path\ndependence case and the low path\ndependence case where effectively these\nsort of represent you know two ways of\nthinking about the inductive biases of\nmachine Learning Systems two ways of\nthinking about why you would get one\nalgorithm over the other why you would\nget you know the color classifier over\nthe shape classifier\nokay so here is Story number one this is\nthe high path dependence world\nso in the high path dependence world\num we're going to say well different\ntraining runs can converge to very\ndifferent models depending on the\nparticular path taken through model\nspace so depending on exactly how we set\nup the initialization it really matters\nsort of you know what what Basin you're\ngoing to end up in really matters what\nhow you walk around that lost landscape\nright that you know there might be\nmultiple basins uh that you know you\ncould fall into that are similar sizes\nand which one of those basins you're\ngoing to find really depends on your\ninitialization and the exact sort of\npath that you take another way to think\nabout this is\num it's sort of very sequence dependent\nso you know maybe we start with you know\nlearning one particular algorithm as\nlike you know one of the circuits in our\nmodel but then to learn some additional\nalgorithm well an additional circuit\nwill only be learned if it's helpful\ngiven that we already have the other\nexisting circuits and if it isn't\nhelpful given you know the things that\nwe've already found then we won't find\nit and so we're sort of thinking about\nthings as well okay we have to first\nfind some individual thing which is\nincrementally helpful and then other\nthings which are incrementally helpful\nbuilding on top of those other you know\nuh things that we found previously and\nso we sort of are imagining our model\nbeing built up in this sort of iterative\nprocess and an iterative process that is\npotentially you know very sort of path\ndependent it's a process that is you\nknow can vary quite a lot depending on\nexactly you know which things we build\nup first in the model\nokay yeah question about this\nuh last week you mentioned that\ninduction heads were engines that were\nbuilt from transformance and they\noccurred not immediately in training but\nare they relatively predictable point\nand they basically always happened\nwouldn't that suggest that Transformers\ntraining are sequential but also low\npath dependents\nyeah it's a really good point so I think\na couple of things I'll say to this so\nfirst I'll say I think induction heads\nare some amount of evidence at least\nagainst half dependence in the case of\ninduction heads where it seems like the\ninduction heads really you know always\nform regardless\num though I also agreed that there's\nsort of two things that I'm conflating\nhere there is a you know extent to which\nthings all converge uh you know or\ndiverge depending on how I set it up and\nthere is should I think about things in\nthis you know uh sort of piecemeal way\nwhere we're adding things on top of\nother things it could be the case\ntheoretically that the best way to think\nabout in the inductive biases is well\nthey always converge to the exact same\nthing but the way to understand what\nthey converge to is to try to do this\nanalysis of you know what circuits would\nbe good to build upon other circuits but\nit'll always end up being the same\ncircuits that's totally plausible I\nthink that\num the thing that I would say is that I\nthink that that's a little bit less\nlikely if it is the case that the\ninductive biases sort of do you know the\ncorrect way to think about it is this\nsort of iterative process that I think\nwe should expect on average things to\ndiverge more and to diverge uh sort of\nconverge more in in the other case that\nwe'll talk about in just a sec so\num but I don't mean to say that these\nsort of two cases are comprehensive\num is maybe that you know the key Point\nhere is that you know we're going to be\ntalking about two possible cases and\neach one bakes in a bunch of assumptions\nyou could alternatively separate out you\nknow just a couple of those assumptions\nand mix and match and try to run the\nanalysis again and see what you know\nwhat happened in that case\num I think that would be a really\nvaluable thing to do and would help give\nus even more confidence or less\nconfidence depending on how things go in\nterms of you know um what the sort of\noverall you know structure of different\nyou know conclusions from different\ntypes of inductive biases\nfor our purposes right now we're sort of\nlooking at these sort of two very Broad\nand disparate categories where one is\nlike very high path dependence things\nare very sequential and very Divergent\nand the other one is very low path\ndependence which is going to be things\nare very convergent and very\nnon-sequential we'll talk about it a bit\num and so hopefully that you know the\nideas sort of cover as much of the space\nas possible but I absolutely agree that\nyou could imagine something more in the\nmiddle that sort of mix and matches some\nof these properties without the others\nthey're not necessarily like these are\nthe only two uh ways to think about it\nokay and and I also want to point out\nthat you know both of these views that\nwe're going to talk about have I think a\nbunch of empirical evidence supporting\nthem so if we think about the high path\ndependence view you know some empirical\nevidence that I think supports the high\npath dependence view\num there's birds of a feather do not\ngeneralize together which is a paper\nthat found if you take a bunch of\ndifferent fine tunings of Bert which is\na language model\num that sometimes they can have really\ndisparate generalization Behavior you\nknow really disparate behavior on you\nknow new data points\num this very often happens with\nreinforcement learning because\nreinforcement learning can be really\npath dependent in the sense that once I\nget a particular policy which additional\nyou know changes that policy are helpful\nare really dependent on you know what\nmoves is the thing going to make after I\nmake this move right so you know if we\nthink about playing go you know making a\nparticular move might only be helpful if\nI know how to follow it up right and if\nI don't know what the correct next move\nis after that move I don't know the sort\nof whole line then making that one\nindividual move might not actually be\nhelpful and so there can be a lot of\nmore a lot of sort of additional path\ndependence in reinforcement learning\num\nand so so there's some evidence like\nthis that sort of points us into uh you\nknow thinking that maybe High path\ndependence is is a good way of thinking\nabout the inductive biases of machine\nlearning systems\nokay\nbut it's not necessarily the only way so\nthe low path dependence view is also I\nthink quite plausible so how do we think\nabout low path dependence\nso the idea of low path dependence is so\neverything sort of converges to the same\nsort of unique Simple Solution\nregardless of you know how we set up our\ntraining process\nso why might we think that this is true\nwell this is sort of going back to what\nwe were talking about at the beginning\nabout you know really disparate Basin\nsizes you know some basins can be much\nlarger than others and really heavily\nfavored over others and so we can be in\na situation where you know maybe\ntheoretically you could end up in any\nBasin but effectively for all intents\nand purposes there are going to be these\nstructurally simple algorithms with\nreally really large basins that dominate\nThe Lost landscape and sort of force you\nknow essentially all paths into them and\nfurthermore in this case we're going to\nbe sort of thinking about how to\nunderstand which algorithms are these\nsorts of you know occupy these really\nlarge basins I was just trying to\nunderstand this basic question of you\nknow which algorithms are structurally\nsimple right these sort of basic\nproperties of the algorithm right Global\nproperties about if I look at the final\nalgorithm can I understand how simple is\nit how many sort of you know distinct\npieces does it use how many free\nparameters does it have to try to\nunderstand\num you know how likely it is and so we\nin this case we're imagining we don't\nhave to think about this sort of you\nknow piecewise iterative process where\nwe're not imagining that you know first\nyou develop this thing and then you\ndevelop that thing where instead\nimagining let's just think you know if\nwe look at the final algorithm and\nevaluate some Metric of you know how\nsimple structurally simple is this final\nalgorithm from that we can determine How\nlikely it is for that algorithm to be\nthe one that we find from our machine\nlearning process and so this is sort of\nyou know a different way of thinking\nabout it\num I think it's um again you know also\nquite plausible and supported by lots of\nevidence we talked sort of you know in\nthe first lecture about stuff like\nrocking where you know you have this\ncase where if you just keep training\neventually you just you know snap into\nthis one really structurally simple\nalgorithm uh that sort of dominates the\nlandscape after a bunch of training\num and we can also think about this is\num the Min guard at all line of work\nwhere what they find this is another\nsort of piece of evidence in favor of\nlow path dependence\nwhat they find is that they take\num\na sort of approximate model of what it\nmight look like to have low path\ndependence which is take a um\nrandomly initialized neural network\nwhere we just sort of Select each\nparameter from from a gaussian and\num you just keep randomly initializing\nyour neural network over and over and\nover and over again until you stumble\nupon a random initialization which\nactually does a good job on the task now\nthis is not practical to ever do in\npractice but you can sort of approximate\nit via various approximations and you\ncan measure if you theoretically did\nthis if you just kept doing random\ninitializations until you found one that\ndid a good job\nhow close is that to what grading\ndescent does\num and if those two things are very\nclose it implies a form of low path\ndependence because it implies it doesn't\nreally matter the exact path the grading\ndescent takes it only matters the sort\nof you know how you know likely is it on\nthe random initialization which is sort\nof a proxy for you know how structurally\nsimple is the algorithm because as we\nsort of talked about at the beginning\nyou know structurally simple algorithms\nhave more implementations in parameter\nspace\num and they find that these are very\nclose and so you know that's some\nadditional evidence in favor of the low\npath dependence View\nokay so I think that these two overall\nviews are plausible again they're not mu\nthey're not the only views right there\ncould be other ways of thinking about uh\nyou know inductive biases they're\ndefinitely not you know exhaustive but\nthey are like two very different views\nand so we can sort of you know try to\nunderstand what happens in each case as\na way of sort of covering the space\nor at least you know trying to get you\nknow different conclusions from\ndifferent places\nokay so we're going to start by\nstart by end you know going back to the\nreason you know the thing we're trying\nto talk about is deceptive alignment uh\nso the thing that the first thing that\nwe're going to do is we're going to try\nto understand what is the likelihood of\ndeceptive alignment in the case of high\npath dependence so we're thinking about\nuh you know specifically inductive\nbiases in this High path dependent sense\nwhere uh you know it really matters the\nexact order in which we develop\nindividual circuits and individual ways\nin which you know you know you know\nthings can be very Divergent\num and we're trying to understand in\nthat view How likely is the development\nof the deceptive alignment algorithm\nright as opposed to the robustly line\nalgorithm in in you know the same sort\nof sense as we're just thinking about\nyou know shape versus color classifier\nokay so uh we need to you know get a\ncouple of things out of the way in terms\nof you know how we're going to be\ncomparing you know various different\nmodel classes here so\num we're going to be supposing that our\ntraining process is good enough that for\nour model to be able to do a good job it\nhas to sort of fully understand the\nthing that we want\num this is sort of very similar to what\nwe were you know talking about at the\nend of the last lecture about you know\nthe limit of adversarial training where\nyou know if we can do enough adversial\ntraining if we can give it enough\nexamples of different possible\nsituations that it could be in we can\neventually sort of you know get down to\nonly a small section of possible\nalgorithms that are able to you know fit\nall of the examples we can give them so\nyou know we're not going to be thinking\nabout algorithms like\num you know the where it's where it's\ndoing like the you know the maze thing\nwhere it's trying to go to the Green\nArrow right at the end of the maze\nbecause in that example we can just\ntrain on mazes that have the green arrow\nin a different spot right and we can\nsort of force it to try to do the right\nthing in that case and so we're really\nonly considering models that even in the\nlimit of every possible example that we\ncan generate in training it still looks\nlike it does the right thing and\nimportantly like we talked about last\ntime this is going to include the\ndeceptively aligned model because of\nstructural facts like RSA 2048 mean that\nthere are always going to be you know\nfacts about the world which are easier\nfor our model to check than they are for\nus to generate which means even the\nlimit of our ability to generate as many\npossible things as we can there will\nalways be sort of you know properties of\nthe world a deceptive model can look for\nto sort of tell whether it's something\nit's you know a test example that we\ngenerated or a real thing in the world\num and so you know even for all the\npossible things that we can generate the\ndeceptive model is still going to be\nthere but it does get rid of a bunch of\nthese sort of you know bad models that\ndon't really understand the actual goal\nright they're just trying to you know do\nsome proxy for the goal instead\nokay yeah question\nthis idea of doing a number of\nadversarial training so that you end up\nwith a training process that the model\nfully understands what we want whether\nit cares or not How likely do you think\nwe are ability to get that in practice\nas we get smarter machine learning\nsystems\nyeah so I think that you know eventually\nthe thing really you know should\nunderstand what we want right\num there is sort of some finite\ncomplexity uh of the like actual\ndesirable thing that we're trying to get\nit to do\num and you know as your model becomes\nbetter and better at understanding you\nknow what it is that you're trying to\naccomplish eventually it's going to\nunderstand now we taught last time right\nabout you know there's at least two\nmechanisms for doing that understanding\nright it can understand it just via\nright internalization where you know it\nhas some internal understanding and\neventually green descent just sort of\nadds it or it can happen right through\nmodeling where the model is just trying\nto understand the world and in\nunderstanding the world eventually just\nlike figures out enough facts about the\nthing we're trying to get it to do that\nyou know that's the sort of process via\nwhich it learns the information about\nwhat we're trying to accomplish but\neventually you know as we make more\npowerful AIS that are better and better\nat understanding things eventually\nthey're you're going to figure out facts\nlike this uh you know facts like the\nthing we're trying to get to do now one\nof the really important things right and\nwe're going to be talking about this a\nbunch is that it really matters in the\nhigh path interview at least it really\nmatters The Ordering of which one of\nthose two sort of paths comes first you\nknow do we get the internalization first\nwhere we you know first uh gain the\nunderstanding you know internally where\nit's you know grading essential sort of\ndirectly replaces the existing proxy\nwith you know the thing we're trying to\nget to do or does the modeling happen\nfirst where you know learns and\nunderstands enough facts about the\ntraining process to figure out what\nwe're trying to get to do that way\num but either way you know eventually\nyou know we're sort of can imagine that\nyou know as AIS get better you know this\nshould this should occur\nokay\nso uh how are we going to be comparing\ndifferent model classes right different\ntypes of algorithms that have the\nproperty that they sort of fully\nunderstand what we want so in the high\npath dependence case uh there's sort of\ntwo you know main things we're really\ngoing to be looking for we're gonna be\ntrying to understand uh you know from\neach step you know that it takes to sort\nof get to that algorithm how much\nmarginal performance Improvement do we\nget you know as we're sort of going\nalong walking through this lost\nlandscape you know how steep is that\npath down the Lost landscape you know to\nthis sort of algorithm right how many uh\nyou know how much performance\nImprovement are we getting along the way\nand the reason this is important right\nis the gradient descent is going to take\nthe steepest path and so we really want\nto understand you know is this actually\nyou know giving us large incremental\nperform performance improvements as\nwe're sort of progressing towards this\nthis type of model\nand then the second question we want to\nunderstand is how many steps are needed\nright if it's going to be really really\ndifficult for us to eventually reach the\nyou know desired uh you know that\nparticular model class it takes a bunch\nof steps that's going to make it less\nlikely if it's fewer steps uh that's\ngoing to make it more likely because\nwe're sort of imagining that well you\nknow in this High path dependence case\nthe more steps that you're sort of the\nsequentially that you have to take you\nknow the more steps that you could end\nup taking some different thing instead\nand so you know fewer steps means it's\ngoing to be more likely\nokay so these are the sort of things\nwe're going to be looking at in the high\npath dependence case where we're sort of\nthinking about inductive biases in this\nHigh path dependent sense the things\nthat we're going to be looking at to\nunderstand How likely it is that we get\na particular model class a particular\nsort of set of possible algorithms are\nthese sorts of factors\nokay\nquestion yeah\nwhat follows from Big steps so in Boeing\nto one\nyeah what does it mean that it in every\nstep there is a big Improvement pitch\nDirection This is nevertheless for and\nwhy\nI'm perfect\nso the idea here is that if we're\nthinking about write a lost landscape\nwhere you know each individual possible\nsetting of the model parameters\ncorresponds to some particular algorithm\nwhich has some particular loss as we're\nsort of you know traveling around this\nlost landscape there's some Basin that\nhas you know perfect loss or whatever on\nthis you know it fully understands what\nwe want\num and you know for some each model\nclass you know we're looking at\ndifferent sorts of basins and we're\ngoing to try to understand you know what\nis the path from some random\ninitialization to that Basin look like\nand then based on that path we want to\nunderstand you know How likely is that\npath under this sort of high path\ndependence View and if we're thinking\nabout it well okay so what would make a\npath really likely to be selected by\ngradient descent well one of the really\nimportant factors is the steepness of\nthe path because the thing that\ngreatness sent will do is it will select\nthe steepest Improvement in each\nindividual point that sort of\nstructurally what gradient descent is\ntrying to do it's looking for the\ndirection of greatest marginal\nperformance Improvement and then\nstepping along that direction and so\nbecause of this sort of basic\nunderstanding of you know gradient\ndescent we're going to be thinking about\nlooking for large marginal performance\nimprovements along the path as being\nevidence of the sort of path that green\ndescent would want to take in this sort\nof high path dependency\nokay yeah more question\nso it's either that this performance\nimprovements per step is like a relative\nmeasure compared to other steps that the\nmodel could take at any point right\nbecause like it seems like kind of hard\nto hard to tell uh absolutely for a\ngiven training run or a model by us like\nwill we get the high marginal a little\nbit called this movements per step or\nless like what's the way we could start\napproaching this a question\nyeah it's a really good question so I\nthink that you're sort of anticipating\none of the things we're starting to talk\nabout which is well I'm gonna have to\nmake a bunch of assumptions and I think\nthe way that we're sort of going to\napproach this is we're going to say\num here are three particular paths and\nthen we're going to just compare those\npaths and we're going to see you know at\nwhich point do those paths diverge and\nwhen they diverge which has the\nstrongest marginal performance\nImprovement right and I that's not in\ngeneral the right way to do it right if\nwe really wanted to solve you know solve\nthis problem properly we would need to\nintegrate over all possible paths right\num but that's really hard and uh so\nwe're not going to try to do that right\nnow we're just going to look at sort of\nthree example paths and try to compare\nyou know the individual probabilities of\nthose specific paths and from that we'll\nyou know try to draw some sort of\nGeneral conclusion but obviously that is\na little bit you know uh you know we\nhave to be a little bit wary of that\nbecause it's not literally correct right\nwe aren't uh actually doing the full\nanalysis if we want to do the full\nanalysis we really would have to do sort\nof what you're talking about and look at\nall of the possible paths and really try\nto understand each so we're going to be\ndoing an approximation\nokay cool\nall right\nso I just said we're going to be\ncomparing three different paths so uh\nthose are going to be three different\npaths to three different model classes\num that we're going to be sort of trying\nto understand\num I'm these are probably going to be a\nlittle bit familiar from what we were\ntalking about at the end of the last\nlecture uh I'm going to introduce a new\nanalogy to sort of help us you know\nreally get our heads around these\nso we're going to suppose that uh in the\nsort of analogy that you are the\nChristian God\nand you want to get humans to follow the\nBible and so we want to understand what\nare the sort of humans that generally uh\ndo a good job at following the Bible\nokay so here is an example human that\ndoes a good job at following the Bible\nis Jesus Christ so why does Jesus Christ\ndo a good job at following the Bible\nWell in you know the sort of Christian\nontology the idea is that you know Jesus\nChrist is essentially the same as God\nhe's sort of a copy of God uh in some\nsense right and so\num\nyou know Jesus Christ does you know does\nthe right thing because Jesus Christ\njust has the exact same beliefs the\nexact same goals uh as God right and you\nknow in that sense you know therefore\nbecause you know God wrote the Bible you\nknow uh Jesus Christ is going to follow\nit right\num so this is like one type of human\nthat you can imagine in this sort of uh\nanalogy uh okay there's there's others\nthough right so another example would be\nMartin Luther of of Protestant\nReformation Fame uh you know Martin\nLuther you know he didn't he wasn't you\nknow the exact same is gone but he\nreally cared about trying to understand\nwhat the Bible said and so he put a\nbunch of effort into you know reading it\nand trying to understand it and you know\nhe thought you know it said some\ndifferent things and other people did\nand so you know he nailed a bunch of\nTheses to a wall but\num the basic idea right is that he put a\nbunch of effort into trying to\nunderstand the Bible and really cared\nabout doing whatever it was that the\nBible said he should do right and so\nthat was another type of human right\nthat is in fact really good at doing\nwhat the Bible says okay but it goes\nthrough a different route right rather\nthan Martin Luther just inherently\nhaving the exact same goals and desires\nas God Martin Luther wants to figure out\nwhat desires God has from reading the\nBible and then do those\nokay and then we have you know human\nnumber three which is Blaze Pascal so\nBlaze Pascal of uh you know Pascal's\nwager Fame uh he doesn't you know\nnecessarily know whether you know God is\nreal but he's like you know I think that\nif I went to hell it would be really\nreally bad and I think there's you know\nenough probability that I could go to\nhell that I'm going to try to do a\nreally good job at you know what the\nBible wants me to do because not because\nI actually agree with the Bible or want\nto do the things the Bible wants to do\nbecause I think if I don't do that I\nmight go to hell and so Pascal is does\nnot you know unlike Jesus who really you\nknow cares and has the same values as\nGod and unlike Martin Luther who really\nwants to figure out what God actually\ndesires Pascal doesn't doesn't care\nabout what God wants he just doesn't\nwant to go to hell and so he's trying to\navoid that outcome by you know\nattempting to play along and do what God\nwants him to do\nokay so if you've been paying attention\nyou probably know where I'm going with\nthis which is we can think about you\nknow each of these as mapping on to one\nof the categories of you know that we\nwere talking about at the end of the\nlast lecture right where we can think\nabout the Jesus Christ as the internally\naligned models the Martin Luther's as\nthe courageously aligned models and the\nblaze pascals as the deceptively aligned\nmodels well the idea is that the Jesus\nChrist sort of internally have the exact\nsame values uh as the ones that we're\ntrying to get uh the Martin Luther\nmodels you know they try to figure out\nwhat we want and then do that but they\ndon't necessarily you know have exactly\nthe same beliefs they just like\neventually figure it out by you know\ntrying to understand what we're trying\nto get them to do and the blaze pascals\nare deceptive you know they actually\nwant something totally different but\nbecause they're afraid we're you know\nthe the training process is going to\npunish them you know they they play\nalong okay\nso these are going to be the three model\nclasses that we're going to be trying to\nunderstand the likelihood of sort of\ngetting to\ndo it believe that the Martin Luther\nwill eventually decline Jesus Christ's\nor the the problem in this case is just\ntoo difficult the soul and so are\ncourageously aligned\nagent cart actually become internally\naligned by just knowing\nso in this case so so it could be that\nyou you you know start from a quarterly\nline thing you become internally aligned\nthing but but it's not necessary so in\nthis case we're imagining that all three\nof these model classes are model classes\nwhich actually fully fit the data\nperfectly right so these are all cases\nwhere\num each each one of these actually is\nable to do everything correctly in the\ntraining data and so it's not like the\nchords really line one is sort of making\na bunch of mistakes and therefore it\nlike needs to become internally aligned\num what we're going to be imagining is\nthat the query line one actually does\nknow enough facts about what you know\nthe Bible says you know what like we\nactually wanted to do that is able to\nsuccessfully figure out what it is that\nwe wanted to do in each case and so\nbecause of that it's not going to be the\ncase that like you know these These are\ngoing to differ in performance and so it\nmay be that you know you start from\ncoercimal you go to internal but if that\nwere to happen it would have to be a\nfact about the inductive biases right it\nwould have to be something like rocking\nwhere you know they both actually fit\nthe data in training but eventually you\nknow you shift from one algorithm into\nthe other which might happen and we'll\ntalk about you know why you might end up\nwith one or the other but\num you wouldn't it wouldn't happen you\nknow in this case we were imagining it\njust because like the cordially aligned\none isn't able to fit the data so the\ncourage we align one\nso the courage blue line one basically\nis internally aligned with inspect the\ntraining guarder but not necessarily\nwith respect to the real world\nno no so internal alignment does not\nmean like is aligned right internal\nalignment means in this case something\nvery very specific which is it is it\ninternally has directly hard coded into\nthe algorithm into the weights of the\nmodel the exact objective that we are\ntrying to get to accomplish right that\nis what internally line means here right\nso it is not like you're you know it's\ninternally aligned with respect to one\nthing or not the other right the idea\nhere is that all of these models are\nlook aligned on training but the reason\nthey do a good job is different right\nthe internally aligned one does a good\njob because it directly as a part of its\nalgorithm defines what it means to do a\ngood job in exactly the same way that we\nwould Define it the cordially align one\ndoes not do that right it doesn't have\nsome part of the algorithm some part of\nthe model where it under it like defines\nexactly what it means to do a good job\naccording to what humans want instead it\njust sort of defines what it means to\nfigure out what it you know how to do a\ngood job according to what humans want\nand then it sort of figures that out\nbased on understanding you know what\nwe've written and what we've told it in\na bunch of facts about the world right\nand then the blaze pascals they you know\njust encode for you know some other\nrandom objective and then they have to\nfigure out ah as a you know to be able\nto accomplish this other thing that I\nreally want I'm going to need to play\nalong in training\nYeah question\num feel free to tell me that this is a\nquestion for a q a\num because it seems my derailless\nfurther but uh\nhere we assume that the model does fit\nthe train there perfectly like a\nbasically understand exactly what we\nwant\num so I guess first of all would you say\nthat the currently like\nmost strongly aligned models uh fit any\nof this uh with my guess being maybe\ncorrigible for like Chachi petite style\nlevels\num and do you think it's valuable at all\nto think about which which of these it's\ncloser to and uh to think like what's\nwhat's the current path and like\num\nto factor in the current trajectory of\ntraining\num before we start just assuming that\nokay we we have converged to something\nwhat it what is it uh do you think it's\nvaluable to think about the current\nprocess\nyes in terms of is it valuable the\nanswer is definitely yeah in terms of\nare we going to talk about it right now\num a couple of things so so the first\nthing I'll say is that I think I would\nnot describe you know the best models\nthat exist right now as being able to\nfully understand the thing that we want\nthey they don't do that right there's\nlots of cases where you know they\nclearly are not doing the thing that\nwe're trying to get them to do\num and so they're sort of not yet in\nthis limit of adversarial training\num and so they're not any of these model\nclasses right now\num they might eventually become one of\nthese model classes but right now\nthey're none of them because they don't\nactually you know they're not actually\nin this limit yet\num you know we're going to be talking\nabout a lot of these all three the sort\nof paths that we're looking at in the\nhigh path events case for all three of\nthese often they're they're gonna look\nlike they start out in very similar\ncases and they're I think that you know\nthat starting point is at least sort of\nyou know defined by you know what we\nthink current models are doing to some\nextent I think that for for large\nlanguage models in particular the story\nis a little bit trickier because it's\nunclear if they even sort of have proxy\nobjectives in a meaningful sense\nlater on in another lecture we're going\nto talk a lot more about theories for\nlike what internally might large\nlanguage models be doing and how to\nthink about that\num I think it's a a very particular case\num but I don't want to talk about it\nright now so we're mostly going to be\nimagining that the sort of arguments we\nmade in the last lecture about like you\nknow it's going to have some sort of a\nproxy you know may subjective mostly go\nthrough I think it's unclear the extent\nto which those arguments go through and\nwhat they sort of imply about\num you know current large language\nmodels but we will return to that\nspecific question later on another\nlecture\nokay yeah question\nso I think I understand why it's hard to\ndistinguish this to be in Pascal and the\nfirst two during the training but is\ndon't Jesus Christ and Martha legit\nbehaved the Branded during training Bond\nMartin Luther asked more questions even\nlike it depends on what kind of training\nwe have but I imagine that even the\ntraining can be somewhat interactive and\nI be I would imagine that a courageable\nmodel would even towards the very end of\nthe training would ask a lot of\nquestions okay is this what you mean and\nincrementally aligned be just do the\nperfect job without versions\nyeah that's a good point so I think that\nthe thing I would say is that is\npossible\num but it is also possible that the\ncordially line one just already knows\nthose facts right it may be that it\nalready knows all of the relevant facts\nto understand what it is that we you\nknow we wanted to do but it has to\nactually figure them out right so we\nthink about in an individual you know\nrun of the model the internally line one\njust starts knowing exactly what it\nneeds to do the chords really line one\ndoesn't it starts knowing that it needs\nto figure out what we need to do and\nthen it goes and looks in like wherever\nit's keeping its knowledge and it has a\nbunch of knowledge right and it's like\nah I understand these facts about the\nworld based on these facts about the\nworld that I understand I understand\nthat in this case I need to do this\nparticular thing and so it could be that\nit already has that knowledge um because\nwe're sort of imagining this limit we're\ngoing to be I think mostly imagine the\nsituation where it already has the\nknowledge I'm also basically just going\nto imagine they all already have that\nknowledge because again in this limit to\nbe able to do a good job on basically\nyou know all of the possible tasks that\nwe can give it it probably just has to\nunderstand tons and tons of facts about\nthe world and just sort of understand\nthose you know to start with and so\nwe're just going to be mostly imagining\nthey just sort of already has all this\nall this knowledge\nokay yeah question\nwhere are we talking earlier about like\npath dependence which was entirely\ndependent on the training process itself\nbut now we're just assuming that that\ntraining process is already done now we\nalready know everything that we need to\nknow I'm a bit confused of that change\nyeah so maybe it's not clear we haven't\ntalked about the paths yet right to get\nto these I'm only talking about the\nmodel classes and then very very soon\nwe're going to talk about what a path\nmight look like to get to each one of\nthese model classes okay and then we'll\ntry to understand How likely each one of\nthose paths are okay great okay let's do\nthat so uh we're gonna start with a path\nto internal alignment right so uh we're\nin the high path about its world we want\nto understand How likely is this model\nclass for the internally aligned model\nand to do that we need to understand uh\nyou know what does a path look like uh\nto get there so I said previously that\nall these sorts of paths are going to\nstart in a pretty similar spot and so\nthe place that we're going to start uh\nis we're going to start with this sort\nof proxy aligned model so we talked\nabout this a bunch uh in the last\nlecture the case where you sort of have\nthis Mesa Optimizer it has some proxy\nobjective right it starts with some goal\nlike you know go to the Green Arrow\nrather than go to the end of the maze\nthat is correlated with the thing that\nwe want but not exactly the same and\nthen we want to understand starting from\na model that looks like that where are\nyou more where you sort of what\ndirections are you likely to go in the\nsort of high path dependence uh View\nokay so in the internally aligned path\nwhat we're going to imagine is that\ngreen descent is sort of going to\ncontinually improve that proxy right it\nstarts with some bad proxy you know that\nis sort of correlated it has some facts\nuh about what it is that we're actually\ntrying to get to do but it doesn't\nactually fully sort of correspond\ndirectly to the thing that we want\num but it keeps it keeps on modifying it\nbecause you know if it had a better\nproxy it would have better performance\non various examples right if you think\nabout the case of the you know the Green\nArrow if we can give it an example where\nit actually has to you know do the right\nthing on the larger maze with the green\narrow in the wrong spot then it's going\nto have better performance if it has a\nbetter proxy and so green descent keeps\nmaking the proxy better and better in\nuntil eventually you know it sort of\nfully uh you know fits\num the data and furthermore for the path\nto internal alignment we're going to\nimagine that that process of sort of\niteratively improving the proxy happens\nbefore the modeling process right we\ntalked previously about you know there's\nsort of two paths via which all this\ninformation about the you know the thing\nwe're trying to get it to do could enter\ninto the model right it could be that it\njust you know in general understanding\nthe world better is good for doing a\ngood job on almost any task and so\nthere's going to be you know pressure to\njust understand the world better\nwe're going to imagine in this case that\nmost of the pressure that causes it to\nunderstand the proxy it sort of first\nhappens you know via just direct\npressure to make the proxy better\nbecause having a better proxy lets you\ndo better on some examples rather than\nthe sort of having a better world model\nunderstanding the world better causes\nyou to do better on examples and so\nwe're because of this we're sort of\nimagining in this case that first your\nproxy gets gets better and better until\nit sort of directly corresponds to the\nthing that we want before your\nunderstanding of the world gets better\nand better and better until you fully\nunderstand exactly what the training\nprocess is trying to get you to do and\nhow the training process works\nokay and we'll see sort of why that's\nimportant because if we have the\nopposite ordering then uh we're sort of\ngoing to end up in in one of the other\npaths\num but if we have that ordering then we\ncan sort of be in the internally line\npath and so once we've gotten to this\npoint where you know first it sort of\nproxy becomes uh you know perfect and\nthen it learns about the training\nprocess\num then once that's happened\num there's no additional pressure to\nmake the model deceptive or courageable\nor whatever because once we've gotten to\nthe point where the model has a perfect\nproxy\num then uh the model is you know going\nto be doing its very best to pursue the\ntraining objective that we're trying to\nget it to pursue and so there's sort of\nuh you know no performance gains from\nmodifying that model to become deceptive\nuh you know the only reason that you\nwould sort of you know go to the\ndeceptive equilibrium would be if\nthere's some you know increase in\nperformance right in this sort of high\npath dependence case right you're sort\nof looking at these paths if there's\nsome really high marginal performance\nImprovement then you would go there but\nwe've already reached an equilibrium in\nthis case before we ever got to a point\nwhere there was you know a sort of\nreason to become deceptive and so in\nthis case once we've gotten here we're\nat equilibrium there's sort of no reason\nfor them all to change we have you know\nthis sort of internally aligned model\nthat's doing the right thing\nokay so this is one path\num so now we need to understand you know\nHow likely is this path\nso I think there's sort of an issue and\nthe issue is this sequencing staff that\nI was talking about so\num why is the sequencing step uh so\ntricky so I think that the really tricky\nthing about what I was sort of just\ntalking about is that I'm supposing in\nthis internally line case that we sort\nof fully understand the proxy before we\nsort of uh you know we fully understand\nthe training objective we get this sort\nof perfect proxy before we sort of\nunderstand you know a bunch of these\nfacts about the world\nand um I think this is you know the\nfirst thing that's implausible is\num if we're thinking about uh you know\nthe sort of two things that greatness is\ndoing to make the model better you know\ngiving it better beliefs and giving it a\nbetter uh you know an objective that is\ncloser to the thing that we want there's\ngoing to be diminishing marginal returns\nto you know putting additional\nImprovement in each one of those two\nsort of buckets right and so it's it's\njust you know on pre-ray seems really\nunlikely for you to put for it to sort\nof be the sort of correct thing to do to\nallocate all of your you know gains to\none of those two classes and really sort\nof not put them in the other right uh\nyou know you it just you know if you\nhave two possible things that you can\nimprove about the model and your each\nindividual point trying to take the\nmaximum marginal performance Improvement\num and there's diminishing marginal\nreturns doing both of those two things\nthen you're going to be spreading out\nyour abilities you're sort of you know\nuh between the two you're going to be\ndoing them both simultaneously uh or\nalternating between them rather than\nsort of doing one and then the other\num okay but that doesn't fully answer\nthe question because we still also have\nto understand okay given that gradient\ndescent is probably going to be doing\nboth of these things simultaneously\num you know why would it first get to\nthe point where you know it understands\nthe training process before it sort of\nhas a perfect proxy\num and for that it really depends on the\ncomplexity of the goal they're trying to\nget to do right so if we're trying to\nget to accomplish something that's\nreally simple then it might be that\nhaving a perfect proxy is really easy\nright it actually doesn't take very many\nsteps to get to a perfect proxy we can\njust get there and we get the internally\naligned model\num but if the thing we're trying to get\nto do is really complicated if we're\ntrying to get to do something that\nrequires you know a ton of complexity\nsomething like um you know human do what\nhumans want to do you know it requires\nyou know all of this you know pursue\nhuman values whatever you know something\nreally really complex\num that has all this sort of inherent\ncomplexity then it's going to be really\ndifficult for the model to sort of\nmemorize all of that information\ndirectly\num and in the same sense that we were\nsort of talking about the end of the\nlast lecture where it's just\nsubstantially easier for once the model\nyou know given that these two processes\nare happening simultaneously and you are\ndeveloping a bunch of information about\nhow to understand the world that\ninformation is sitting there and so it\nwould make more sense and you know for\nthe model to make use of that existing\ncircuitry right in the same sense in the\nhigh path dependence case where we're\ntalking about you know really caring\nabout these sort of which things are you\nknow the biggest marginal performance\nImprovement given that we've already\nstarted with this other circuitry if\nwe're starting from a case where it's\ngoing to be improving its model of the\nworld alongside it then we should expect\nthat it's going to you know the thing\nthat's going to give us the most\nmarginal performance Improvement is to\nmake use of that existing circuitry that\nunderstands the world and has a bunch of\nknowledge about it\num rather than just sort of Reinventing\nthe wheel entirely and sort of just hard\ncoding the uh you know the thing that we\nwanted to do separately\nokay\num and so the basic case here is you\nknow understanding the world is\nsomething that is just generally really\nvaluable in lots of cases it's something\nthat is going to have a lot of you know\nsort of reasons to increase it in many\nmany different scenarios and getting a\nbetter and better proxy is something\nthat is maybe you know more difficult\nit's something that\num you know has more potentially more\ndiminishing multiple returns is\nsomething that maybe requires more\ncomplexity is something that can be done\nmaybe more easily you know given that\nyou already have a bunch of\nunderstanding of the world in terms of\nthat understanding of the world rather\nthan sort of trying to implement it\nseparately\num and so we're sort of worried that the\nsequencing might fail that um we might\nget the opposite sequencing that instead\nyou know understanding the world you\nknow and understanding the world\nsufficiently to understand the training\nprocess\num could happen before we get these sort\nof perfect proxy\nokay question\nwhat do you think will happen if the\nproxy is sort of good enough for like\nnot perfect but like decently close to\nwhat we want and the the model will\nstart learns about the training process\nat that point do you think I'll like\nkeep caring about the proxy goal was\nlike oh I really want to do this thing\nthat is not exactly what we want but\nlike very close to what we want or do\nyou think it will like stop caring about\nthat at all\nyes I think that\nthis turn here is that the proxy could\nget really really close\nas long as the proxy isn't\neither thing that we want then there's\nstill some pressure to be deceptive\nright so as long as the proxy hasn't\nmatched up exactly on the desired thing\nthen\num there's some gains for that model\nfrom pretending to do the desired thing\nand then waiting and then eventually\ndoing the slight difference right\num and that's sort of dangerous because\nof that slight difference is something\nthat we really care about that could be\nvery problematic\nand so\num this is sort of one of the one right\none of one of the other things that's\nhappening here right is that this sort\nof perfect proxy standard can be can be\nvery rigorous right it depends obviously\non how hard it is you know the objective\nthat we're actually trying to get to\naccomplish but if we're trying to get to\ndo something very very complex having a\nperfect proxy can be is a very very high\nstandard and um you know given that both\nof these sort of two things improving\nthe world model and improving the proxy\nare happening simultaneously that really\nhigh standard might be something that's\nreally really hard to meet before you\nhave uh you know understood the training\nprocess efficiently\nYeah question\nhow sure are we about that part that a\nlittle difference in the objective\nfunction causes huge tragedy after him\ndeployed like have confidential in this\nyeah I think it really depends on the\ndifference right so some small\ndifferences we might be totally fine\nwith and some small differences might be\ncatastrophic right so it really depends\non is well what is I mean what is the\nmetric I mean the metric is just is that\ndifference about a thing that we really\ncare about right so if it has a really\nsmall difference in you know what it's\ndoing that is directly around you know\nsomething that is extremely important to\nto us uh then we're going to be really\nsad and if that small difference is\nsomething that is totally relevant to\nyou know anything that we you know\nhumans care about then then we don't\ncare right so you know is a small\ndifference dangerous well maybe you know\nit's not like in general dangerous or in\ngeneral not dangerous it depends on what\nthe difference is right\nokay\nokay great so course of alignment is you\nknow the second path this is the Martin\nLuther path and we want to understand\nyou know How likely is the corrigible\npath\nso again we're going to start with this\nproxy line model right we're going to\nstart with this case where uh you know\nit has some proxy the proxies aren't\nreally great\num but now we're going to imagine that\nit's we're sort of going to accept the\nargument we were talking about\npreviously about you know okay you know\nit seems really you know weird in some\ncases to sort of get this perfect proxy\nbefore you've understood the training\nprocess so what if instead we imagine\nthese two things are happening jointly\nright it is jointly improving the\nmodel's understanding of the world and\nuh improving its proxy\nand then we're going to imagine well at\nsome point the model is going to learn\nfrom you know its input data from\nunderstanding the world\num you know in the process of\nunderstanding the world the model is\neventually going to learn a bunch of\nfacts about the training process and\nhere we're going to imagine that happens\nbefore the model's proxy sort of becomes\nperfect\nthe opposite sequencing from the last\ntime\nand then uh given that that sort of\nopposite sequencing happens\nuh then what we're going to imagine\nhappens is that the proxy gets you know\nreplaced with a sort of pointer to uh\nwhat it is that we're trying to get the\nmodel to do in the model's understanding\nof the world right the model now\nunderstands you know has a bunch of\nfacts and information about what the you\nknow what the humans or what the\ntraining process is trying to get us to\ndo and so grain descent can just get rid\nof the existing bad proxy and replace it\nwith this much better proxy that is you\nknow you have this understanding of the\nin the in your understanding of the\nworld somewhere so just do that thing\nright you know do that thing that you\nalready understand uh you know and that\ncould be substantially simpler and\nsubstantially better right because if\nwe're in this situation where uh you\nknow your sort of world model uh you\nknow has a better understanding of what\nit is that you're trying to get the\nmodels to do then the proxy does then\nthere's going to be performance gains\nfrom ripping out that proxy and\nreplacing it with the thing that is sort\nof pointing to the understanding of the\nworld we can sort of think about this as\nif the sequencing happens in this\ndirection opposite of the previous one\nthere's you know a sort of performance\noverhang where um so long as the model\nstill has its sort of bad imperfect\nproxy there are performance gains to be\nhad from replacing that proxy with the\nmodel's understanding of what of what we\nwanted to do that exists in the world\nmodel right the model sort of in some\nsense in this case knows what it is that\nwe wanted to do right it knows a bunch\nof facts about the training process\nknows a bunch of facts about you know\nwhat these that we're trying to get to\ndo but those facts sort of haven't you\nknow connected to the model's actual\ndesire to act because it just has some\nsome proxies that are still there that\nyou know it's that's using to determine\nwhat it does and so there's that sort of\ncreates this overhang where you know it\ntheoretically should be able to do\nbetter than it actually is because it\ndoes have the knowledge of what it is\nthat we want to do but it's not making\nuse of it effectively and so what sort\nof ripping out the proxy and replacing\nit just with like a pointer to its\nunderstanding of the world what we what\nyou know uh understanding of what we\nwanted to do\num it sort of resolves that overhang and\nsort of gets into a position where now\nthe model is actually making use of that\nknowledge that it has effectively and so\nthat's sort of a substantial performance\nof event because it solves all of these\nexamples where you know the model\nactually did understand what we really\nwanted to do but it didn't actually\ncorrectly act on that in training\nokay and then again once we reach this\npoint we're at a similar sort of\nequilibrium previously you know there's\nno additional reason to change the model\nin any direction because it sort of\nfully understands what we want and just\nyou know doing the right thing in\ntraining in every case\nokay\n[Music]\num great\nso this is sort of another path\num and so now we need to ask you know\nagain you know How likely is this path\nand I am again have a concern\nso here's my concern here so\num we talked previously right in you\nknow the first you know in the internal\nalignment case about the difficulty of\ngetting a proxy right there was perfect\nright difficult to keep getting a proxy\nthat really directly captured everything\nthat we cared about well the same sorts\nof difficulties also arise in getting a\ngood pointer so you can think about it\nlike this right so you know Martin\nLuther right needs to figure out you\nknow what it is that God wants to do you\nknow and so he's going to do that by\nlike reading you know a Bible but you\nknow which Bible right like how should\nhe read it how should he interpret it\nright you know these are all really\nreally tricky difficult questions\num that you know many different sorts of\nMartin Luther's you know different\npeople that have tried to interpret the\nBible you know have disagreed on and so\num if you want to make sure that in\nevery individual training example this\nwould of course the line model actually\ngets it right every time\num it's not enough to just sort of point\nto that understanding of the world right\nyou can have a perfect understanding of\nexactly how you know what the Bible says\nand what's going on in the world but not\nhave a perfect you know not be able to\nunderstand which of the pieces and which\nof the things in the world are the ones\nthat we actually care about right\num you know you have to be able to you\nknow understand if you're trying to like\nfollow human instructions that it's not\nyou know follow the instructions of you\nknow whoever is typing at the keyboard\nbut it's like following instructions of\nthe human right you know there's all of\nthese sorts of you know tricky\ndifficulties in actually being able to\nunderstand okay of all of these facts\nthat exist in the world which are the\nones that actually correspond to the\nthing that you know you're trying to get\nme to do right and that isn't a fact\nabout the world as much as it's just the\nthing that you you know there's a fact\nabout what we're trying to get to do\nright it's just a basic structural thing\nright can you understand all of the\nfacts about the world but it's not clear\nwhich of those facts necessarily is the\none that we wanted to be caring about\nright and that takes up some additional\nsort of you know tuning to find the\ncorrect way of interpreting which thing\nin the world it is that we want the\nmodel to be doing and so\nin the same way that sort of you know\ngetting a better and better proxy is the\nsort of long and arduous path where you\nknow there's sort of diminishing\nmarginal Improvement to getting a better\nand better proxy at each you know step\nalong the way\num there's also diminishing marginal\nImprovement to getting a better and\nbetter pointer right so you know\num eventually yes if it has you know a\nperfect pointer and understand exactly\nwhat we're trying to get to do it's\ngonna have perfect performance in the\nsame way that eventually if it has a\nperfect proxy and understand exactly\nwhat we're trying to get to do but but\ngetting to either of those situations\nstill requires this sort of long and\narduous path of incremental improvements\nwhere you know the pointer gets a little\nbit better you know the proxy gets a\nlittle bit better and eventually you\nknow they sort of match up with what\nwe're trying to get them to do but that\nlong and arduous path can be one that is\nyou know not necessarily super favored\nby grain descent it's one that you know\nin this High path dependence view uh you\nknow can take a bunch of steps and can\nbe a bunch of steps that have you know\nsmaller and smaller performance\nimprovements as we're going along\n[Music]\num and so in that case you know we're\nsort of concerned here um an important\nthing right to point out though is that\nwe're really relying on this fact that\nthe thing we're trying to get to do is\nreally complex\num and difficult to learn and it's\ndifficult to point to\num but that's going to be true in many\ncases where we're trying to get our\nmodels to do really complex difficult\ncomplicated important tasks\num and so when we're in that case you\nknow we're sort of dealing with this\nproblem that\num you know well there's just going to\nbe a bunch of things that we have to get\nthe model to you know get exactly\ncorrect if it's going to be able to sort\nof do the exactly the thing we want\num and that's sort of still the case in\nin either the corrigible or the sort of\ninternally aligned uh case\nokay\n[Music]\nOkay so\nuh we have path number three which is\ndeceptive alignment so uh How likely is\nacceptable alignment path so the\ndeceptive land half starts out very\nsimilarly to the courageable path we're\ngoing to imagine that we start with a\nproxy line model and then we're going to\nimagine that similarly to the corish way\nline case we have the sequencing in The\nCordial Direction where uh first we\nlearn the you know understanding of the\ntraining process before we have learned\nthe sort of correct perfect proxy\nokay\num and then once that's happened we sort\nof again have this overhang right this\nsort of you know overhang that we were\ntalking about in The Cordial case where\nthere was sort of performance\nImprovement lying on the ground where\nthe model theoretically knows in terms\nof its knowledge about the world a bunch\nmore facts about the training process a\nbunch more facts about what we're trying\nto get to do then it's actually making\nuse of in terms of it you know how it's\nactually you know taking its actions\nbecause it's just operating on some\nproxy and there's you know there's\nsomething has to happen right great\ndoesn't have it's going to do something\nto resolve that overhang and the courage\nline case the way that overhang was\nresolved was it you know ripped out the\nproxy and replaced it with this pointer\nto understanding you know these facts\nabout the world but in an assembly line\ncase there's something different the\ngrading descent can do that resolves the\noverhang in you know that also resolves\nthe O-ring so what can gradient descent\ndo well it can take the existing proxy\nwhatever it is and make it into some\nsort of long-term goal right so you know\nwhatever the existing proxy was that you\nknow it had\num if that proxy becomes a long-term\ngoal then given that the model has an\nunderstanding of the training process\nand understands how it works and what\nwe're trying to get to do if it you know\nhas some long-term goal that is\ndifferent than the goal that we're\nactually trying to get to do then it has\nan incentive to be deceptive right now\nsuddenly it has a reason to try to play\nalong so that eventually it can you know\ndo its long-term goal that it really\nwants\nokay and so in the deceptive case\nthere's an alternative modification\navailable to gradient descent the green\ndescent can you know can change the\nmodel in a different way than the\ncommercial case that also resolves this\noverhang right in both cases you know in\nthe deceptive line case it had more\ninformation about the world that it\ncould be using to have to take good\nactions that it wasn't right because you\nknow it it sort of had this bad proxy\nand the corresponding line case what\nhappens is it makes use of that\ninformation via pointer in the\ndeceptively aligned case what happens is\nit makes use of that information via\ninstrumental reasoning right it just\nreasons it's like well to be able to get\nyou know whatever this you know\nlong-term goal is the best thing for me\nto do is to do what this training\nprocess wants and so I'm going to you\nknow do exactly what the training\nprocess desires\nand a really really important Point here\nis that this instrumental reasoning\ndoesn't go through the same problem that\nwe were talking about earlier about this\nsort of difficult you know uh path uh\nyou know incremental path because\num the deceptive line model doesn't\nactually need to know the you know all\nof these sorts of same facts that the\ncourse relying model does about you know\nexactly how you know what thing to point\nto\num because\num the deceptive line model just figures\nit out based on reasoning through well\nI'm gonna just do whatever thing in fact\nresults in me doing the best you know\neventually being able to survive this\ntraining process right and doing that\nreasoning lets it you know figure out\nwhat are the things that it should be\ndoing right because the actual things we\nwanted to get to do in training are you\nknow doing those things are going to be\nthe things that most let it survive\ntraining and do whatever it wants later\nand so you know we could imagine a case\nwhere like you have a chords with a line\nmodel and the the pointer that the\ncoordinary line model has is to whenever\nthings let you survive the training\nprocess So eventually you can do some\nevil thing\num and that would work right but that's\nthe deceptive line model right that is\nwhat the deceptive line model is doing\nit's sort of like a cordial line model\nwhere it has some pointer that describes\nyou know exactly what it is that you\nknow in the world model that it should\nbe paying attention to but the way that\npointer is implemented in the\ndeceptively line model is by doing this\ninstrumental reasoning by figuring out\nokay look at your understanding of the\nworld and based on your understanding of\nthe world figure out which things you\nshould pay attention to such that if you\npaid attention to those things you would\nbe the most likely to be able to you\nknow get some long-term goal eventually\nin the real world\num and that's a really efficient and\neffective way to influence a pointer\nright and so in the same way we're\ntalking about the quarterly line case\nwhere there's this really difficult long\npath to be able to get a sort of pointer\nthat sort of you know actually is able\nto you know accomplish the correct thing\nthey deceptively align you know case it\ncan get there very quickly right all it\nhas to do is you know get some long-term\ngoal you know have some long-term goal\nthat you eventually care about and then\nall of this sort of performance overhang\nis immediately resolved because you know\nto the extent that the model had already\nthe information about what it was that\nthe training process actually wanted to\ndo as soon as it has the objective of\nokay playing along in the training\nprocess it's immediately going to be\nable to capture all of that performance\num uh yeah question\ntrue it's well for me yeah but basically\nthere doesn't seem to be much of a\ndifference in complexity I'm confused\nwhy you know to say this is more more\nefficient like an easier pointer to\nlearn in those cases you need to light a\nDefender pointer like\ndo the thing that the training process\nwants you to do uh in the deceptive case\nthere is an additional reasoning step of\nfor instrumental reasons so it just\nseems like the deceptive case is harder\nto get to\nyes this is a really good question so\nlet's try to break this down so I think\nthat one thing that is sort of really\ntricky here is distinguishing between\ntwo sort of Notions of complexity so we\ntalked uh so yeah so one notion of\ncomplexity is you know what things have\nto be in the algorithm right what things\nsort of have to be encoded in the\nweights what are the sort of structural\nproperties of the algorithm that need to\nbe specified for the model to be able to\ndo a good job and then there's another\nthing which is what things does the\nmodel have to figure out right what\nthings does the model have to figure out\nevery time you run it right when you run\nthe model and when it does a bunch of\nthinking what are the things that has to\nyou know discover in that thinking\nand here we're imagining well it doesn't\nmatter that much you know uh what\nhappens in terms of the you know in some\nsense all of these models eventually are\ngoing to have to discover the same\nthings in thinking right because they\nall fully understand what we want right\nand so to be able to fully understand\nwhat we want when you run them they\neventually have to get to the point\nwhere they they have the exact same\nunderstanding of exactly what it is that\nwe want what's different is the way that\nthey do it right what's different is the\nyou know what has to be encoded in the\nactual algorithm that results in them\nhaving an understanding of what it is\nthat we're trying to get them to do\nright and so the difference here is not\nthat they they don't have the same\nunderstanding eventually you know every\ntime you run them they're going to have\nthe same understanding because they\ndidn't have the same understanding they\nwouldn't be able to do a good job but\nthey get there in different ways right\nthe internally line model gets that that\nunderstanding by having it 100 fully\nhardcoded it just loads it straight up\nfrom the weights right the core is\nreally aligned model it gets it by uh\nhaving some pointer some function that\nthat said you know takes in the world\nand how outputs you know takes in its\nunderstanding of the world a bunch of\nfacts about the world and outputs what\nis the thing that I should be you know\ntrying to do in this case the\ndeceptively unemployment like The\nCordial online model also has this like\nfunction right but its function is\ndifferent right in the course of the\nline model we're sort of you know\nrequiring that function to be a function\nthat actually is correct right it is the\ncorrect function it's the one that\nactually does the right thing that\nactually takes in understanding of the\nworld and outputs the correct thing that\nwe humans would actually want to do in\nthat case but the deceptively line case\ndoesn't have to implement the correct\nfunction it can Implement a incorrect\nfunction it could implement the function\nthat is just you know do simple\ninstrumental reasoning right start from\nthe premise you know I want to\naccomplish some long-term goal and then\nfigure out which of these facts you know\nbased on that premise which of these\nfacts will be most you know relevant to\npay attention to to accomplish that goal\nand so\num the deceptive to the line model can\nhave a substantially simpler function\nthere that does that does that uh\ntransformation you know from facts about\nthe world to you know each individual\nruntime what is the thing that I should\nactually be paying attention to and\ntrying to accomplish in this individual\ninstance based on those facts\num and so that's that's the difference\nright is that the way that they actually\nget to you know what is the thing\nthey're trying to get to do is different\nand deceptively aligned case it doesn't\nhave to you know for you know let's\nthink about right you know in the course\nwith line Keys we were talking about all\nthese various facts about like you know\nshould I pay attention to the human in\nthe chair should I pay attention to you\nknow the human you know whoever's typing\nat the computer or like the actual human\nthose facts have to be hard-coded\nsomewhere right the the correct function\nhas to just directly load up I'm\nsupposed to look for you know the actual\nhuman I'm not supposed to just look for\nyou know whoever's on the computer\nbecause that's the actual correct\nimplementation of the function uh but\nthe deceptive line model doesn't have to\nhard code those facts right all the\ndeceptive line model has to do is have a\ngood general purpose reasoning system\nthat says based on these facts what is\nthe actual best thing for me to do to\nget the most you know uh you know do the\nbest on my long-term objective and so\njust does that computation\num and you know that's how you influence\nthat function and so because of that it\ndoesn't have to have all of these sort\nof difficult you know uh incremental you\nknow things that has to learn it just\nyou know runs this basic reasoning\nprocess and it should already have the\nreasoning process right like it doesn't\nneed to influence some new reasoning\nprocess right we're imagining it's\nalready a proxy line model it already\nhas good reasoning processes that are\nsort of generically there in the model\nand so it can just make use of the\nexisting reasoning processes and just\nuse them for the goal of you know trying\nto\num\nyou know accomplish them one triple yeah\nquestions\nokay I think this last sentence is the\none that's not your for me so that's to\nprove that this AG sort of AJ is\nsomething that we develop as a scientist\nAI\num their the lab that develops it its\ngoal is to\nhad then develop better cancer cures and\nchips and whatever I can imagine it\nbeating it already has some proxy girl\nwhich is sort of what we want it's\nalready pretty good at in job because it\nreally understands biology really well\nbut I imagine that it's plausible that\nfrom its Bird model the kind of steps\nthat are needed to follow or goal and\ndeceive people that by following the\ngoal these kind of things are not yet\nbuilt in so building this up seems to be\napproximately as hard as modifying its\npointer to the weather pointer\nI think that's a really good point I\nthink that and in fact uh probably I\ndon't know if we'll we'll get to it but\nwe might potentially much later talk\nabout you know you know proposals that\nsort of try to leverage that intuition\num I think that's basically correct that\nyou know in fact if you are training an\nenvironment where being able to do this\nsort of instrumental reasoning is just\nnot helpful for performance then um this\nthis path looks way less good because\nnow gradient descent has to actually do\nthe work of hard coding in all of the\nYou Know instrumental reasoning\ncapabilities\num rather than just being able to make\nuse of the existing circuitry right and\nin that case yeah I think I agree that\nthis path at least in the high path\ndependence case\num looks worse it doesn't it doesn't it\ndoesn't seem like it outperforms\num I'm mostly imagining a situation at\nleast here for our purposes right now\nwhere you're trying to get to do\nsomething very complex and very\ndifficult you know some you know very\ngeneral tasks we wanted to do you know\nall the things that you know humans want\nor whatever because we're sort of just\nimagining this like default case where\nwe're just like you know throw machine\nlearning and all of the you know\nproblems that we wanted to be able to\nsolve\num I agree that there may be things that\nwe could do to try to escape that and I\nthink that is you know one of the things\nthat we could do is try to just you know\nfocus on situations where that sort of\ninstrumental reasoning capability is not\nrelevant um I think the issue with that\nsort of approach though is that well\npeople do want AI is to be able to do\ntasks or instrumental reasoning is\nimportant right and So eventually you\nhave to be able to confront that fact\nright we have to as a society deal with\nthe fact that there are people who want\nAIS to do things that require\ninstrumental reasoning and we don't want\nthem doing that to uh you know be\ncatastrophic\nokay and what do you think half\nspecific this instrumental reasoning\nshould be so if we let's say that we uh\nthey are developing um an AI whose goal\nis to build a good Factory if it\ninvolves lots of instruments it needs to\nhave some sorts of instrumental\nreasoning ethnic because building a\nfactory has lots of instrumental Subs\nfirst you need to buy concrete and\nwhatever but still I can imagine that\nthe part that you are negotiating with\nsomebody who you need to deceive these\nthese circuitry would not exist at the\nbeginning at all and again it would need\nto build it from the ground up and maybe\nit's not working\nyeah I think it's very unclear there's\ndefinitely going to be some gray areas\nwhere like you might have some\ninstrumental circuitry but not maybe all\nthe instrumental circuitry it needs and\nin that case you know it's going to be\nsomewhere in the middle right it's gonna\nbe like maybe this path looks okay but\nit looks like less good than the case\nwhere it already had all the circuitry\nso\num yeah there's going to be a bunch of\ngray areas where you know if it has like\nno instrumental circuitry you know by\ndefault it doesn't need to solve the\ntask at all then this path looks the\nworst and if it's solving a task where\nthe instrumental circuitry is just\nalways necessary to solve the task\nregardless then this path's gonna look\nthe best\nYeah question\num just just to be clear\num\nis is the is the effectiveness\ndifference comes does this come from the\nfact that for a corrigible model it\nneeds to update\nthe pointer like a very complicated\npointer through SGD while the deceptive\nreal-line model needs to only update the\nLike Instrumental gold for SGD and\nyou're saying that the deceptively\nAllied model would do further reasoning\non top of that\num in it or pass is that where the the\neffectiveness difference comes true\nI think that's basically right yeah\nwe're sort of distinguishing between you\nknow what things does it learn via like\nhard-coded in the weights and what\nthings does it you know figure out that\nlike end up appearing right in the\nactivations right what things does it\nfigure out at inference time\num it doesn't necessarily have to be\nlike oh every single time it does the\nreasoning if you have a model that can\nlike cache beliefs and like you know\nstore information maybe it has a\nretrieval database or something then\nit's not necessary that like every\nsingle time you run it has to rerun the\ncomputation but yeah we're imagining\nthat essentially the information there\nabout like okay this is the thing that I\nshould be doing in this case was mostly\ngenerated by the model rather than\ngrading design right green descent just\nproduced the you know long-term goal\nYeah question\nit seems like both of these pads involve\nlike model learning about its proxies or\nits goals at the same time it's\nacquiring its knowledge so but like I\nfeel like in current ml systems or in\ncurrent systems like at Chachi PT or\ngpt3 with then plus rohf like gpd3 is\njust like acquiring the model\nunderstanding of the world and that our\nlhf then like gets or points to like a\nspecific gets a model to point to a\nspecific goal that's already learned so\nin this case which would be selected the\ninstrumental gold or the actual goal\nbecause in the case where case we're\nsort of trying to put the goal into the\nend and you start with pure Knowledge\nLearning you're not going to get this\ninstrumental path where it learns the\ngoal first right now whether you get the\ncordially aligned path the deceptive\nline path or something else there you\nknow it's not these this these you know\nthree paths are not exhaustive right um\nand again we'll talk later about you\nknow other sorts of things that you\nmight get when you're trying to train\nyou know this sort of predictive um you\nknow like a like language model\num but at the very least yeah I think\nthat the the thing you're saying is at\nleast some evidence that the internally\naligned path specifically is really\nunlikely in the case of um you know\ndoing this sort of fine tuning regime\nokay uh yeah more question uh I'm sorry\nI'm not quite sure why the eternally\nlike that wouldn't be unlikely in this\ncase because I feel like that would have\nat least the same probability as the\ndeceptive or the a courageable One path\nin this case because like you just need\nto choose any goal right and if like\ninstrumental goals and core and Court\nyou know the terminal goal is like\nequally likely to play selected in this\ncase since them all as just as a\nknowledge why wouldn't the uh adrenal\nrewind copy equally Lively\nyeah so the problem is that the internal\nline path sort of hard codes the goal\nright rather than you know having it be\ncoded in terms of the model's existing\nknowledge but if the model starts with a\nton of existing knowledge including a\nbunch of knowledge about exactly what\nwe're trying to get to do then it's\ngoing to be a way shorter path and\nsubstantially you know better to just\nmake use of that existing knowledge\nsomehow right because it's already there\nright and so it you know it's not going\nto want to re-implement it in you know\nsome hard-coded proxy and so in that\nsense you know it's going to be\nsubstantially more favorite when you're\nstarting with that knowledge already\nexisting to go down either The Cordial\nor the deceptive path\nOkay so\num\nuh okay great yeah so we've been talking\nabout like How likely is acceptable line\npath the basic case uh is that you know\nturning your existing proxy into a\nlong-term goal is a relatively simple\nmodification and importantly it's a\nmodification that doesn't have this like\nreally long path it's a relatively\nstraightforward modification once you do\nit once you sort of go to a long-term\ngoal you capture all of the gains from\nthat you know overhang right it doesn't\nhave to like keep improving the\nlong-term goal to be more and more\nlong-term right once you get the\nlong-term goal then as soon as you have\na longer goal it's going to do all the\ninstrumental reasoning necessary to\nfigure out you know exactly what the\nthing is that it should be doing to get\nyou know the most performance in\ntraining and so it doesn't have this\nlong and arduous path in the same way\nthat we've been talking about earlier it\nhas this single you know thing that it\nhas to do and once that thing happens it\ngets all of the sort of performance from\nthe overhang and so because of that you\nknow it is sort of a path that we might\nexpect to be favored in this sort of\nhigh path dependence case now there's a\nbunch of assumptions right you know so\none important thing right is that we're\nyou know we're doing something that's\nyou know um really complex goal that you\nknow like we were talking about also\nwe're doing something where there\nalready exists go to instrumental\nreasoning circuitry but if we're in that\nsetting then this pass seems um very\nfavorable\nokay so that's the high pass dependence\ncase\num now we're going to shift gears and\nwe're going to talk about the low path\ndependence case so we want to redo the\nwhole analysis that we just did but\nunder a different view of the inductive\nbiases right a different way of thinking\nabout you know what is what it is that\ndetermines which algorithm we get\nokay so again we're going to be assuming\nthe model has to fully understand what\nwe want in terms of what model classes\nwe're looking at uh we're going to be\nimagining\num uh yeah but then to be able to\ndistinguish between these model classes\nwe have to look at you know what are the\nthings that are going to matter uh in\nterms of you know properties of an\nalgorithm that you might look that you\nknow the low path dependence sort of you\nknow inductive biases might be looking\nfor you know what does it mean for an\nalgorithm to be structurally simple\nand in particular we're going to isolate\ntwo things and I sort of mentioned\nalluded to these earlier where thing\nnumber one is Simplicity and and number\ntwo is speed so\num what is the difference between these\ntwo things and what do they really mean\nso it's a little bit tricky what they\nmean so I'm going to try to unpack them\nso Simplicity is how complex is it to\nspecify the algorithm in the weights\nright if I were to write down you know a\nPython program that ran exactly the\nalgorithm that might you know uh was\nbeing implemented my my model how can\nhow many lines would that program take\nright um we can think about this also\nright you know if we think about you\nknow how complex is the algorithm of\nright if we're thinking about something\nlike the wheels uh you know windows on\ntop of Wheels detector how complex is\nthat basic structure right you know\nfirst you have to have a window detector\nyou know you need some window detection\nfunction and then some wheeled section\nfunction and then you need to write the\nyou know combine these two function\nright and so we're sort of trying to\nunderstand how complex is that circuitry\nright how complex you know is the\ndescription of the circuitry that the\nmodel has to implement right\num\nthe thing that this doesn't capture\nright so the thing is explicitly not\ncapturing is the complexity of actually\nrunning that circuitry right it's only\nonly the sort of Simplicity captures the\ncomplexity of describing the circuitry\nand then speed captures the difficulty\nof running that circuitry now an\nimportant thing to point out is that in\nmany realistic architectures it is in\nfact the case that they actually all\ntake the same finite amount of time to\ncomplete and so we need to be really\nclear here that what we're talking about\nis not the literal algorithm that is\nimplemented by the you know uh weight\nmatrices by your actual you know uh\nneural network we're talking about is\nthe sort of the structurally you know\nsimple algorithm that is sort of behind\nit right you know what is the algorithm\nof you know the core thing that it's\nactually doing is like look from Windows\nlook for reals combine them right and we\nwant to understand for that core thing\nthat it's doing\num you know\nhow you know match computation does that\nyou know algorithm take and how\ndifficult is it to describe so what some\nways of thinking about why these two\nthings matter right so why does it\nmatter you know how complex is it to\ndescribe the algorithm well it matters\nhow complex it is to describe because\nthat matters uh that affects things like\nyou know how large is the Basin going to\nbe because the more complex it is to\ndescribe\num you know the the fewer the the more\nparameters it's going to take to be able\nto describe it and so you know it's not\ngoing to be the case that there's going\nto be a bunch of different possible\nparameterizations which all correspond\nto the same function but if it's very\nsimple to describe then there may be\nmany parameterizations which all\ncorrespond to the effectively same\nsimple structural function right so\nSimplicity matters and speed also\nmatters so why does speed matter well\nit's a little bit trickier so one way to\nthink about this is if we're thinking\nabout the space of all possible\nalgorithms that you know one could have\nin theory well just to start with only\nsome of those algorithms are actually\nimplementable in a neural network right\nbecause it actually is a finite function\nand you know it can only Implement so\nmuch computation and so we should expect\nthat things which algorithms which\neffectively take less computation will\nalso be you know more likely for us to\nfind given that we don't know exactly\nwhich ones are going to be included in\nthe space of you know which you know at\nany given point you know is our Network\nlarge enough to be able to implement it\num but if it's you know if it's\ninfluencable you know via a smaller\nNetwork then you know it might be more\nlikely that we'll find it earlier\num similarly if the model if you know if\nit's an algorithm takes a bunch of\ncomputation\num and there's another model that\naccomplishes the exact same thing but\ntakes less computation then that means\nthere's sort of extra you know\ncomputation there's sort of extra\ncomputation available to sort of do\nother things like you know spend extra\nyou know computation you know slightly\nimproving its performance in various\nways and so\num you know both of these two things\nmatter to some extent in terms of\nunderstanding you know How likely is a\nparticular algorithm now they're\ndefinitely not the only things that\nmatter even in a low path to penance\ncase where we're imagining that the only\nthing that matters are these sorts of\nglobal structural properties of the\nalgorithm even in that case there's\nalmost certainly other Global structural\nproperties that matter too\num but these are at least two that do\nseem like they will play a role and are\nones that we can sort of analyze and so\nthat's we'll be imagining that these are\nthe main two that we're sort of going to\nbe looking at uh yeah question before we\ngo\nSimplicity as well like let's say I had\na 10 line neural network and I have two\nalgorithms one requires 10 successive\ncomputations so the only way to do it is\nto be in all 10 networks in all 10\nlayers and the other has two layers\nso that means there's nine different\nways it can actually be instantiated it\ncan go from layers one to two all the\nway through to layers nine to ten so put\nit faster algorithms also be simpler to\nspecifying someone\nuh yeah I think that that is a really\ninteresting point I think that\num it seems at least plausible I don't I\ndon't know uh you know in fact how it\nworks out uh in terms of you know how\nthese things interact I definitely agree\nthat like you know there's certainly an\ninteract to some extent and there are\nvarious models of trying to understand\nhow they interact I think one model of\nsort of trying to understand how these\nthings interacted is I I think is sort\nof reasonable is like a circuit prior\nmodel where you sort of try to\nunderstand you know if we think about\nyou know algorithms as being described\nby the Boolean circuits that are\nnecessary to implement them then you\nknow we can think about the inductor\nbiases as selecting the algorithm which\ntakes the fewest number of Boolean\ncircuits and that in some sense sort of\nis capturing the thing you're saying\nwhere it's sort of a mix between speed\nand simplicity where the sort of faster\nmodels also take fewer circuits to to\nimplement\num\na lot of those sorts of priors that were\nharder to understand they're harder to\nsort of uh figure out you know would the\ndeceptively aligned or the non-deceptive\naligned actually do better\num and so we're going to imagine that uh\nwe're sort of gonna be thinking about a\ncase where\num you know we're just gonna be looking\nat those two specific things uh question\nyeah I didn't want to ask a question\njust make comment on the previous\nquestion uh an example of like the case\nwhere there would be difference between\nSimplicity speed is imagine if uh the\nin every layer you only need like\none neuron for this algorithm but you do\nneed non-linearity so you do need the\nvalue so you do need like Tech and\nsecular layers but you need only one\nneuron in each so this algebra would be\nsimple to implements very few ways to\nneed it but it will be like long because\nso many steps are needed\nyeah it's a good example yeah\nyeah they're definitely going to matter\nto some extent uh we don't know exactly\nlike what the mixes of them you know and\nhow things play out in terms of you know\nwhich one of these is is most important\nbut we're just going to look at both and\nwe're going to try and understand under\neach one of these regimes how well uh\nyou know do these different model\nclasses perform\nokay so we're gonna start with\nSimplicity uh and I'm going to start\nwith a really simple argument for you\nknow trying to start getting our hands\non Simplicity\num I think one way to sort of think\nabout Simplicity is just how many\nalgorithms are there you know how many\npossible ways are there of implementing\na particular algorithm right so we've\nyou know we're talking about this as\nthis relationship between Simplicity and\nbase and size right where the more ways\nthere are in influencing the same\nalgorithm the larger the Basin is\num and so we can sort of understand well\nokay effectively how many different\nsorts of algorithms are there which fall\ninto the each one of these model classes\nand in the same way that you know the\nsort of number of possible weights that\nimplement the same algorithm affects the\nSimplicity of that algorithm the number\nof algorithms which Implement which you\nknow fall into the same model class\naffects the overall Simplicity of that\nmodel class and so we're going to start\nwith a counting argument right how many\nalgorithms are there that fall into each\none of these classes uh of algorithms\nuh so how many uh you know Jesus Christ\nare there right how many you know\ninternally line models are there uh or\nat least how many algorithms are there\nyou know effectively different\nalgorithms influence the Jesus Christ\nmodel I think the answer here is\nbasically one right there is only one\nway to sort of hard code exactly what it\nis uh that we want the model of doing\nright any sort of difference in that uh\nyou know would would you know be a\ndifferent model right we would no longer\nbe the Jesus Christ it wouldn't be doing\nthe correct thing\nuh okay but how many you know Martin\nLuther's are there well uh there's a\nbunch of different ways of looking at\nthe Bible that are all the same right\nyou can have a bunch of different ways\nof you know thinking about understanding\nexactly what it is they're trying to get\nto do that all Converge on the same\nthing now we talked previously about you\nknow that can be difficult right it can\nbe difficult to get the exact correct\npointer but that doesn't there's still\nmultiple of them right you know there's\nstill like you know\nunderstanding exactly how to read the\nBible plus any one individual Bible that\nexists in the world will all yield the\nsame results as long as they have the\nsame contents and so it's still the case\nthat there are multiple different\nLuthers there's still multiple different\nways of pointing to the to the thing\nthat we watch\nand similarly uh you know if we think\nabout pascals well there's even more\nsort of pascals because any long-term\ngoal that you could possibly have would\ncorrespond to a pascal model because any\nuh you know any long-term goal will\nresult in the thing mean deceptive as\nlong as it has something that it wants\nto accomplish eventually in the world\nthen it's going to want to you know play\nalong and be deceptive uh for the\npurpose of doing so on so you know just\nbased on sort of initial analysis it\nseems like there's the the largest\nnumber of models occupy the Pascal uh\nyou know model class the deceptive of\nthe line model class\nokay uh we can do it we can sort of dig\nin a little bit more if we sort of try\nto think about um you know what's going\non here right that's causing this sort\nof Pascal's uh you know to be uh sort of\nmore numerous well um we can think about\nsort of the Simplicity and the\ncomplexity of an individual model as\nbeing described by the complexity of\nsort of three pieces right we have the\nworld model it's understanding of the\nworld we have like its optimization\nprocedure you know its ability to do\ninstrumental reasoning to do all these\nsorts of tasks and then we have the mace\nobjective right whatever it is that is\ntrying to use that reasoning to\naccomplish\num you know we talked previously about\nlike you know cases where you know the\noptimization procedure or the world\nmodel can be different and that can\nreally affect which thing you end up\nwith in this case you know if we're\nimagining a case where we're really\ntrying to get to do some really complex\ntask that requires you know general\npurpose instrumental reasoning and it\nrequires it to basically know you know\nall the various facts about the world\nthen all three of these model classes\nshould have in that specific case\nessentially identical World models and\noptimization procedures they should have\nto be able to do the same reasoning they\nshould have to be able to have the same\nknowledge and so then in that case the\nonly thing that will differ between them\nis the complexity of the remaining thing\nwhich is what they're trying to get what\nthey're trying to do and in that case\nyou know the complexity of what they're\ntrying to get to do well\num\nif we think about that we can sort of\napproximate it as well what is the\nsimplest model uh the simplest objective\nwhich falls into each one of these\nclasses\num is going to be you know what is the\nsimplest base objective right it falls\nin each one of these classes we can\ncompare the complexity of the simplest\none as another sort of metric of trying\nto understand how diff how simple are\nthese different classes right so our\nfirst metric up above was just how many\ndifferent algorithms are there that fall\ninto that model class and that sort of\ncan help us get some understanding\ncomplexity and then the second thing\nwe're going to look at here is you know\nfor each individual model what is the\nsimplest model in each class and how do\nthose simplest models compare and to do\nthat if we're imagining they have the\nsame world model and the same\noptimization procedure then the simplest\nmodel Lynch class is just a question of\nwhat is the simplest Mesa objective in\neach different class\nand so in the internal aligned case well\nthe simplest main subjective is going to\nbe a full specification of whatever it\nis we're trying to get to do and so well\nthe complexity of that is going to\ndepend on how hard it is how complex it\nis the thing we're actually trying to\nget to do but if we're trying to get to\ndo something really complex then this is\ngoing to be really complex\num similarly with the quarterly line\ncase well what is the simplest\ncoordinary line May subjective well it's\nwhatever the simplest specification is\nof how to point to the you know the the\ndesired objective right and like we were\ntalking about previously if the thing\nwe're trying to get to do is very\ncomplex this can be very complex it\nrequires you know hard coding a bunch of\ninformation about exactly which things\nin the world are the ones that we want\nthe model to be paying attention to\num and then you know for this after the\nline what is the simplest deceptive line\nMay subjective well it's whatever the\nabsolute simplest possible long-term\nobjective is right you know make paper\nclips or whatever you know is this sort\nof you know thing that we get here right\nlike anything that eventually you want\nto accomplish in the world uh that is\nlong-term that you know you care about\nthe long-term consequences whatever the\nsimplest possible thing is to implant is\nthe you know simplest possible set for\nthe line objective now that doesn't mean\nright an important you know thing to\npoint out here is it doesn't mean that\nyou would literally get the simplest\nthing in each case right we're not\nsaying the simplest thing is actually\nthe most likely we're saying that\nlooking at what the simplest thing is in\neach class can give us a sense of how\nsimple the overall class is in the same\nway that looking at the number of things\nin the class can give us a sense of the\noverall Simplicity of that class right\nand so we're saying well the fact that\nthe simplest thing in this class is\nreally simple it's just like the\nabsolute simplest thing that you could\ncare about anything in the long term\num you know is is indication this\noverall model class is very simple in\nthe same way that the the fact that\nthere's so many different models that\nfit into the class is also a similar\nindication\nto link it to the previous discussion of\nthe difference between effectiveness of\ncourageable inceptively lines uh sorry\nthe efficiency of the processes\num you're saying this is the simplest\nthing to do or this is the only thing\nthat we have to do is point to the\nlong-term objective of the deceptive\ncase because\nwe do not need to encode to make a\npointer to the training objective in the\nweights right because we we the model\nthinks about that\num\ncould think about that in the\nactivations\nthat's exactly correct yes we're\nimagining that the deceptive line model\nsort of figures out the thing we're\ntrying to get to do in the activations\nit figures it out you know uh on its own\nrather than rather than sort of it being\nhard-coded in the algorithm itself and\nso because of that the only thing that\nwe need hard-coded in the May subjective\npart is just some long-term objective\nand then the sort of world model and\noptimization procedure take care of\nfiguring out what it is that you know\nwould actually do a good job on that\nobjective\nokay so this is the Simplicity I think\nthat\num editorializing a little bit I think\nmy uh my sense is that this is the the\nstrongest case for deceptive alignment\nuh of any of the various different\nthings that we're looking at if you were\njust looking at purely Simplicity\num then I think the keys for deceptive\nalignment is very strong\num I think it's stronger than the high\npath dependence case or in general in\nthe low path dependence case because as\nwe'll see some of the other the the\nspeed is is much less clear\num so let's look at speed so uh how long\nyou know how much computation did each\none of these algorithms take well uh I\nthink that the most basic thing the\nfirst thing is that deceptively align\nmodel seems to be taking more\ncomputation right so why is that well\nthe deceptive align model has to do this\nextra step of instrumental reasoning\nright it has to go through this process\nof okay starting from the fact that uh\nyou know I want to get some long-term\nobjective I then have to figure out\ngiven that I want to do some long-term\nobjective what is the best way to\naccomplish that and you know given that\nI'm in some training process you know\nwhat is the best thing to do for\naccomplishing that training objective\nand you know therefore I should do this\nparticular training objective right and\nthat is a chain of instrumental\nreasoning which takes time and\ncomputation to actually be able to run\nright and figure out\num and so the deceptive line model is\ndoing extra steps of reasoning extra\ncomputation that the the other models\nare not doing and so because of that you\nknow the deceptive align model seems to\nbe the slowest in the sense that it is\nsort of taking the most steps of\ncomputation\nokay and the course of the line model\num is sort of uh take does you know does\nless computation uh I think that the\ndifference between the corrigible line\nand the internal line model is a little\nbit unclear here I think there is some\ncase to be made for why the quarters\nrely model might be slower because it\ndoesn't get to just sort of like\ndirectly load up exactly you know what\nthe desired thing is it has to still\nfollow the pointer in some sense and so\nthat takes some amounts of computation\num but it seems like the the difference\nbetween the deceptively line model and\nthe others is at least is larger where\nthere's this re there's this like\nsubstantial extra thing the deceptive\nline model has to do which is like\nreason about what a good you know what a\ndeceptive model should do you know you\nknow what should I do if I'm trying to\nplay along right that takes reasoning\nand that extra reasoning adds to the\noverall computational cost of this\nalgorithm and so the deceptive line\nmodel sort of seems to be the worst on\nthe speed uh case Yeah question\nso it seems like one thing that could\ntake time is like reasoning about how\nyou could be deceptive but it also seems\nlike just doing the task itself would\ntake time so how do you know that the\namount of time that's being taken by the\ncomputations isn't just for the task\num couldn't like the deceptive model\nmask the fact that it's like taking\nextra time uh to the fact that like it's\ntaking extra time to do the task itself\nthis is a really interesting question so\nit's totally the case that you know\nthey're gonna have to take a bunch of\ntime to actually do the task itself\nwe're trying to factor that out and\ntrying to understand okay what is the\ntime you would have to take in addition\nto the thing you have to do to actually\nsolve the task and the thing that the\ndeceptive line model has to do in\naddition is figure out what the task is\nright like oh that it should be trying\nto play along in this case now\num I agree that it could be the case\nthat it could like find a way to hide\nthat by like doing it in parallel or\nsomething\num\nyou know another way that it can hide\nthis is via like a retrieval database of\nsome variety where it's able to just\nlike store and cache its previous\ncomputation and then load it back\num those could definitely be cases where\nthis sort of thing would get trickier I\nthink one thing I'll point out about the\nlike doing in parallel is going back to\nlike why would we even expect that like\nspeed bias would matter at all well the\nreason we'd expect it to matter at all\nis that you know to the extent that it's\nsort of using it it has like you know\nsome finite amount of computation\navailable to it and to the extent that\nit's using some of that computation to\ndo other stuff that's not directly\nnecessary then\num\nyou know it sort of wasting some amount\nof computation that it could be using to\nlike you know try to do other things\nthat are better you know for\naccomplishing the task uh it you know\nmeans that it has to be influence on\nmaybe a slightly larger model or you\nknow slightly different type of model\nand so that can sort of push you push\nyou back but again it's not it's very\nunclear right like you know it may be\nthe case that you have a really really\nlarge model to theoretically has the\ncapacity to implement any of these\nthings and yet you still end up\nimplanting you know the simplest one or\nthe fastest one or whatever or similarly\nyou have a really small model uh you\nknow but the Simplicity bias is really\nstrong so it's a little unclear exactly\nhow this sort of ends up but um\nyou know at least if we're sort of\nthinking about in this very abstract\nsense where we're just looking at speed\ncomplexity right it seems like on the\nthis sort of speed complexity case\num the deceptive line model loses how\nthat actually cashes out in terms of\nlike how these various like theoretical\nfactors influence the actual practical\ninductive biases is a really difficult\nthing to determine\nokay uh all right so I think that this\nis sort of nice right it seems really\ngood that speed bias sort of uh seems to\ncut against deception\num but there's a bit of an issue here\nwhich is that I think speed bias has a\nbit of a problem which that it may be\nsort of uncompetitive in the sense that\num if we try to build uh training\nprocesses which directly incentivize\ncomputationally fast algorithms we may\nbe losing a lot of what makes machine\nlearning good in the first place so we\ntalked sort of at the very beginning of\nlike you know well why how do we how do\nwe construct machine learning processes\nright like why do we build these machine\nlearning models and how do we how do we\nmake them work well you know the thing\nthat we often will do is try to select\nlost Landscapes you know select our you\nknow machine learning process to have a\nyou know to to bias towards very\nstructurally simple algorithms because\nwe believe you know something like\nOccam's razor that's structurally simple\nalgorithms do a good job at fitting the\ndata and so if we are trying to modify\nour training process to not do that to\nnot be biased towards structurally\nsimple algorithms we run into a problem\nof well you know now we might not\nactually be able to find algorithms just\ndo a good job at fitting it down\nso uh you know going back to something\nwe talked about you know in the first\nlecture uh that I think is sort of\nrelevant here is we can think about you\nknow let's say I in fact practically try\nto do this\num I I try to select the model which has\nthe following property it is the you\nknow smallest model which does a good\njob of fitting the data in some sense\nthat's sort of an approximation of what\na speed bias would be right it's saying\nwe want the smallest model which fits\nthe data well and so this is the double\ndescent curve from earlier so we can say\nif we want the smallest model that fits\nthe data well then what we want to look\nfor is we want the case where the train\nloss you know the training performance\nreaches its optimal\num the first point that happens as we\nvary model size right so as we slowly\nincrease the model size we want to stop\nonce we reach the smallest model that\nhas you know good performance and so we\ncan do that the Blue Hero corresponds\nthe blue and the green corresponds to\nthe green we could do that we can look\nat what happens when we reach zero train\nloss and then how well does that perform\non generalization tasks right and the\nanswer is it's the worst point in the\nwhole graph if we look at you know how\nan individual model size performs on\nactually generalizing to Downstream\ntasks\nthe worst generalization performance\noccurs precisely at the point where the\nmodel first reaches optimal training\nloss\num and I think one way that you can\ninterpret What's Happening Here is that\nyou know we talked about this sort of at\nthe beginning it's sort of this is the\npoint at which it's sort of forced to do\nthis sort of fast algorithm it's forced\nto just like memorize things rather than\num actually sort of implement the you\nknow actually structurally simple thing\nand so it actually ends up doing a poor\njob right\num and so that's a concern because it\nmeans that we might not actually be able\nto get the sort of speed bias in\npractice while still being able to have\nmachine Learning Systems which do a good\njob on tasks right in some sense we're\nsort of forced to have some amount of\nspeed bias because you know we don't\nwant to you know waste a bunch of\ncompute on things but if we sort of push\ntoo hard in this direction then um you\nknow we lose the thing that we wanted to\nget out of machine learning in the first\nplace\num now that might not be entirely true\nso one fact here is that this speed bias\nis looking at the size of the entire\nmodel but instead of looking at the size\nof the entire model we can instead look\nat something like the depth right how\nmany sequential computations are\nrequired that's a different type of\nspeed bias than the total amount of\ncomputation required and if we look at\nthe depth then it's actually much less\nclear\num it actually often does look like you\ncan you know substantially decrease\ndepth and still have good performance so\nso it's not the jury is not entirely out\nhere it may still be the case that there\nare other things that we can do to sort\nof make speed bias competitive to be\nable to you know create training\nprocesses which incentivize you know\nFinding algorithms which are you know\nstructurally use less computation\num but it's it's a little bit tricky if\nfor us to actually be able to find ways\nto do that which actually give good\nalgorithms\nyeah questions\nhow do you think this changes if we move\nto more recurrent architectures or\nTransformers with adapters to Future\nI mean there were still hard coding in\nthe apartments for shorter big time\nI think that's a really good question so\nI think it does change things so in\nparticular it sort of introduces what I\noften will call an average the bias as\nopposed to sort of Max Speed bias so\ninstead of saying uh you know we want\nthe total amount of computation that the\nalgorithm uses on any input to be capped\nat some value we're instead saying on\neach individual input we want you to use\nas little computation as necessary to\nsolve that particular task and I think\nthat that sort of average speed bias\nactually has reason to believe it might\nbe more competitive and better than the\nmax speed\num and so I think there absolutely are\nroom for potentially making use of\nsomething like that yeah so an example\nof something that makes\nsomething like Chain of Thought where\nyou sort of Select for you know the\nmodel thinking for a smaller amount of\ntime and still getting the right answer\nis sort of implementing something like\nan average speed bias so that might be a\nway to sort of you know get around this\nyeah\nmore questions\nthis speed priority\nis that something that a someone has\ntried and B do we have the engineering\nof capacity to actually try it as an\nexperiment as all like now do we know\nhow to actually increments people as\neven if we think it's a good idea to try\nuh well it really depends on what we\nmean by speed right so we can look at\nthis graph here and can be like well\nthis graph is implementing a speed bias\nright what we did was we just look at\nyou know the the first thing that gets\nzero train loss as we vary the size of\nthe model and then we select the thing\nright and this doesn't doesn't work in a\nsense that it yields bad performance now\nyou could imagine doing something very\nsimilar to this where instead of looking\nat total model size you're looking at\none of these other sorts of things uh\nand varying it and then finding the you\nknow the first thing that does a good\njob you could also just add this as a as\na regularization term um there's lots of\nways that you could try and implement it\nI think the tricky thing is knowing what\nsort of speed bias to look for and\nactually believing that that speed bias\nwould be sufficient right you know\nyou're still going to have the\nSimplicity bias right you're still going\nto have all these other biases so you\nactually have to have some reason to\nbelieve that the speed bias that you're\nadding is actually going to do enough to\nget you the deceptive that you know to\navoid the deceptive align model right\nand that's a really tricky thing to\nbelieve and to have any confidence in\nyeah\nbut I was just speaking it would be\nuseful empirical experiment in order to\ndetermine the competitiveness of speed\nbias ignoring whether it would actually\nwork for the time bit which is much\nharder to determine empirically\nyeah so so like I said we have stuff\nlike this right here where you can just\ncheck you know the scaling laws right of\nhow these sorts of things change\num but um yeah I mean checking that for\nLots in various different ways of\nsetting up the training process is\nabsolutely something I think is valuable\nYeah question this Affairs me why this\ngraph shows speed versus Simplicity\ncould you ever come about that\nI would imagine that for Speed would\nneed to look at the number of layers\nthought the total counts uh uh\nparameters which this shows\nyeah yeah good point so I think the the\nkey distinction here is the different\ntypes of\nso there's like total computation speed\nand is like you know total amounts of\ncomputations and then there's like Max\namounts of computations uh right like\ndepth versus total size\num those are both computation measures\nright like the amount of total\ncomputation that it does is a measure of\nof speed in some sense\num as is the maximum amount of\ncomputation on any individual path\nthey're just different metrics right you\ncan think about if you're thinking about\nlike you know parallel computation you\nknow you can either think of the total\namounts of computation done across all\nof your parallel threads or you can\nthink about you know what is the time\nthat it takes for the last thread to\nfinish right and those are both metrics\nof computation there are ways of\nthinking about how much computation did\nthis algorithm use but they're different\nmetrics right um and so this is I agree\nlooking at one of those metrics it is\nlooking at the total computation metric\nand it is not looking at the max\num you know the of any particular\nparallel computation\num I think the thing I was saying\nearlier I think the maximum any\nparticular parallel computation uh looks\nbetter than this this is this is the one\nthat probably looks the worst\nwhy uh unclear uh I think that you know\nyes the question is why I guess\num the answer would be well uh it seems\nlike you know a very you know passing\nanswer I could say I think that you know\nthere's something sort of right about\nbecause razor going on here where you\nknow in the real world actual real world\ndata and real world patterns are you\nknow more distributed according to\nsomething like a average you know speed\nbias or a you know Max parallel\ncomputation bias rather than a total\ncomputation bias why is that It's tricky\nto understand you know it goes through a\nbunch of facts about like you know how\ndoes the world actually work and why is\nit the case that you know particular you\nknow patterns exist in the world and\nother patterns don't um it's you know\nit's sort of a really thorny\nphilosophical question in some sense\nabout like you know what makes you know\nyou know certain hypotheses more likely\nto fit real world data than other\nhypotheses\num but you know it sort of has to in\nsome sense go through that\nokay so uh sum it up uh you know what\nwe've talked about today so uh uh\noverall I think think my takeaway is\nthat it seems like both low and high\npath dependent scenarios uh you know\nseem to be in a in a state where green\ndescent is going to want to push your\nmodel to be deceptive now this relies on\na bunch of assumptions you know we're\nsort of thinking about we're in a case\nwhere we're training on some really\ncomplex thing we're trying to get it to\ndo some really complex task\num and you know we're in a case where\nyou know we sort of have to push to\nreally you know large models uh to be\nable to solve this problem\num but um it seems to me like there's\ngoing to be incentives that are pushing\ntowards the deceptively aligned model\num in in both these cases\num or at least a a reasonable\nprobability in each case of the\ndeceptive line model being favored\num and that sort of seems to suggest\nthat we're going to have to create some\nintervention right something needs to\nhappen I sort of mentioned this\npreviously right you know if this is the\ndefault scenario right if machine\nlearning by default if we just sort of\nrun it on some complex task it gives us\nsomething you know something like this\nright there's there's biases pushing Us\nin the different directions there's some\nreason to believe the inductive biases\nwith yield deceptive line models some\nreasonably if they wouldn't overall it\nseems like the deceptive line model is\ndoing pretty well they have a lot going\nfor them on on you know in both sorts of\nstories\nand so we probably need to introduce\nsome intervention at least if we want to\nbe confident that our you know we're not\ngoing to get deceptive line models we\nneed to introduce some intervention that\nchanges those inductive biases some\nintervention that changes the way that\nour trading process works to incentivize\nit to you know not produce the deceptive\nline you know model\num there could be lots of things like\nthis you know we talked about\ntransparency interpretability you know\nsome way of looking at the model and\nrejecting it if it's not doing the thing\nwe want\num you know some way of you know\nchanging the biases you know more\ntowards speed and rather than Simplicity\nthere are lots of things that we might\nbe able to do you know maybe use trying\nto train it on a really simple objective\num or you know in an environment where\nit doesn't need to do lots of um\ninstrumental reasoning right there's\nlots of things that we can imagine\nchanging about this default scenario\nthat might help but we have to do\nsomething uh because it seems that by\ndefault\num deception is is reasonably likely to\nbe favored as the sort of you know best\nalgorithm according to these inductive\nbiases in both these scenarios\npotentially\nokay so right so an important thing I\nwant to point out is you know we can\nthink about this as sort of the\nalignment Gap right there's some\ndifference between the size in of the\nBasin for the good models and the size\nof the Basin for the deceptively aligned\nmodels right and if we think by default\nyou know there's some chance the\nreceptive line models are going to\noccupy a larger Basin they're going to\nbe more likely for us to find in\ntraining then we need to create some\nintervention that changes that right\nthat provides additional sort of bits of\noptimization uh you know additional\npressure towards\num the you know the good models over the\ndeceptive line model so you know how how\nmuch uh you know if you know how how how\nmuch evidence we have to sort of\ncondition this prior on where the prior\nis like you know over what sorts of\nmodels we get by default until we end up\nin a situation where we're going to get\nthe models that we want you know how\nmuch uh additional sort of change do we\nhave to make to our training process\nbefore it puts us in a situation where\nwe get the desired models\num and this Gap matters a lot right the\ndifference in size of these basins right\nthe difference in you know how likely\nyou are to get each one by default is\nextremely important right if you're in a\nsetting where there's a massive\ndifference in you know the default Basin\nsize and the default likelihood of\nfinding one or the other then you have\nto sort of make some really big you know\nmassive structural change to your\nmachine learning process whereas if it's\nonly a small difference then you know\nyou can get away with much smaller\nchanges to the machine learning process\nto you know change\num you know to try to escape the sort of\nstepping line models\na stance on exact\nlarge I think the Gap is where I think\nit is\nbut this is sort of I think of good\nframe to be thinking about\nwe're sort of going to be trying to\naccomplish is you know how do we end up\nin some situation where we can provide\nsome intervention the changes that you\nknow default path of how what sort of\nalgorithm you're going to get uh to to\nnot select for deceptive alignment\num uh yeah and so that is that's the\ntalk so uh you know we're going to be\ntalking next time sort of about you know\nuh you know a bunch more sort of about\nhow generally how do we start\napproaching solutions to this problem\nbut uh this is at least hopefully you\nknow trying to wrap up deceptive\nalignment you know give us a really good\ngrounded understanding of this very\nspecific you know failure mode that is\nyou know we're very concerned about this\nyou know deception phase\n[Music]\nthank you\nall right we'll do some final questions\num how does this relate to like the\npicture of like metal learning or\nuh where metal learning you're basically\nlearning or an Optimizer and you're\nlearning the inductive biases to put it\nto like another algorithm so and I guess\nit's also related to work on the\nconditioning gender models so uh well\nlike that is the\nthe optimizer that you learn going to be\nlike\nis it possible for it to have like a\ndifferent\nlike mode of enough devices like your\nlike base Optimizer being have high\ninductive biases and then here\nlearned Optimizer have like you know low\npath devices\nyes this is a really interesting\nquestion so first I'm gonna punt on the\nlike thinking about predictive models\ncase because we we are going to talk\nabout that later there will be a later\nlecture where we really try to\nunderstand and this is what I'm\nreferring to you know hypotheses for\nthinking about how language models might\nwork if I just think about the other\nquestion right which is you know the\nquestion here is essentially what if\nwe're in a setting where we learn some\nmodel and that model is itself\nimplementing not just a search process\nright it's a mace Optimizer but it's\nimplementing a search process over other\nalgorithms then that search process\nmight have different safety properties\nthan the original search process\num I think that this is a concern it's a\nbit of a an esoteric concern because you\nknow it's a little bit weird why we\nwould find ourselves in a situation\nwhere you found a model that is then\ndoing another search over algorithms but\nit's totally possible it's definitely a\ntheoretical concern to be potentially\nworried about\num and it's absolutely the case that you\nmight end up with different properties\nin the new case right so it might be\nthat you have a good reason to believe\nthat the sort of outer thing is doing\nthe right thing but then once you find\nanother search process you know it might\nnot be doing the right thing\num\nso this is in fact one of the issues\noftentimes with the speed prior in\ntheory trying to get it to actually work\nout in the worst possible case\num this sort of often comes up\num\nI think that it may be a concerning\npractice it's a little bit tricky I\nthink the way you'd hope to try to\nresolve something like this is that\nwhatever safety properties we have on\nthe top search we can somehow transfer\nthem to the the next level below\num one thing that's really worth being\nvery clear about here though is this is\na different thing than the General Mesa\noptimization case right the general base\noptimization case is just any case we\nwere doing a search over algorithms and\none of those algorithms is itself doing\nlike a search over anything including\nover you know actions or plans and this\nis a case where we found we did a search\nof our algorithms and we found an\nalgorithm that was doing a search but\nthe search that it was doing was\nspecifically also over algorithms right\nyeah okay yeah question\nis learning Phoenix Airport well exactly\nthis\nvery interesting question I think uh\nit sort of depends on how you think\nabout algorithms we're going to talk\nabout this later I think that I would\noften think about in context learning as\ndoing a sort of um\nsearch over conditionals it's like it\nhas a has a sort of probability\ndistribution on like you know how it\nthinks you know what things where things\nit is what thing it thinks it's\npredicting and then as it gets more\ninformation it's sort of trying to\nfigure out of the various different\nhypotheses you know m is this a news\narticle you know is this a\num you know uh you know a fan fiction or\nwhatever it's it's you know getting\ninformation that slots between those\nhypotheses you could sort of think about\nthose algorithms in some sense it's a\nlittle bit um unclear\num whether you want to think about it in\nthat sense\num but we will definitely return to this\nlater and think about it a bunch more\nyeah\nso this is actually a very basic\nquestion but I just recently realized\nthat I don't have a good picture of the\nanswer\nso we are talking about the training\nwhere these general intelligence there's\nModas of the world during the training\nand learns things about hot glues the\nproxy objective is to the real one and\nso on how should I imagine this trading\nis is the agent in Minecraft is a\ntalking with people how do we imagine\never teaching with any\num what do we want about governing a\ncountry or a company what is the\ntraining\nyes this is a really good question I\nthink part of the problem of a lot of\nthis analysis is I don't know you know\nwe don't know exactly how we're going to\ntrain future AI systems and what we're\ngoing to train them on so I think part\nof what we're sort of trying to do here\nis do some sort of you know well what's\nthe what's the worst case scenario of\nthe possible things we might ask AI\nsystems for right we sort of imagined in\nthis particular setting that we're going\nto be asking them to do you know\nwhatever the most complex tasks are that\nwe want they require you know the most\nYou Know instrumental reasoning you know\nthey require them to be able to do all\nof these various different things if we\nwere in that setting and we're trying to\ntrain them to do that you know what\nwould happen right and so\nin fact you know there's going to be a\nvery large range of things you know\nprobably that we ask AI systems to do\nthat we train them on that we get them\nto do in various different setups\num and those might have very different\nproperties uh and so you know I don't\nwant to you know I don't necessarily\nknow exactly what it is that we're going\nto be doing\num I think that in some sense though and\nyou know I was saying before\num if we think about this sort of from\nan economic perspective all of these\nvarious different use cases you know are\ngoing to have you know reasons why\npeople are going to want to build AIS to\ndo them right like and so in that sense\nyou know we want to try to understand\nyou know it may be that 90 of the AIS\nthat we build and 90s the AIS that we\ntrain are totally fine and they like\ndon't have any deceptive issues we have\n10 are deceptive that might still be\ncatastrophic problem right\num and so you know even if it's only in\nsome individual training cases that this\nsort of occurs uh we might still be be\nquite concerned and so that's sort of\nwhy we're looking at this particular\nscenario and understanding okay what if\nwe were training it to you know we train\nit on all of the possible knowledge and\nall the data that we had available we\ntrained to accomplish whatever the most\ncomplex goals are they were trying to\nget it to do\num you know that we possibly might want\nout of it then what happened\nYeah question\nI think I understand that part that now\nwe are concerned about some people\ntrying to build an AI to be a good CEO\nor advisor or politician advice or that\nseems the most General thing mostly we\ncan ask for and but then\nhow should I imagine the training\nhappening this seems to be some kind of\nreinforcement learning training we are\ntalking about it does things if it\ndoesn't predict the next actions we\npunish it if it doesn't do well or\ndirectness training does reward if it\ndoes well and so on uh should I imagine\nthis being done to a future CEO AI\nyes I'll point\nour point is true supervised learning is\nlearning and we have an algorithm and we\nhave a particular data set that we want\nthe algorithm to fit well and then we\nsee how well it fits the data set and\nbased on the mistakes that it makes we\ngo back you know greatness then goes\nback in and it changes it to do better\nright so I don't think that this is\nspecific to reinforcement learning\num and sort of as I mentioned at the\nvery first lecture I think that um you\nknow there are important technical\ndifferences between reinforcement\nlearning and you know supervised\nlearning or other sort of approaches I\nthink that they're in practice often not\nthat important and the sort of more\nimportant thing has to do with the tasks\nyou're trying to get to do so a simple\nexample of this is that let's say I want\nto solve like a you know traditional RL\ntask like you know playing chess or\nsomething\num I can actually do that via something\nthat looks more like supervised learning\nso if I do something like a decision\nTransformer where would I have is I\nactually supervise learn my model to\npredict what the action would be that\nwould get a particular reward so I sort\nof conditioned the model on observing\nsome reward and then have it predict the\naction that you would have to take to\nget that reward that I can collect as\njust a general data set of you know a\nbunch of just like you know things that\nI can supervise learn I just want you to\nat the data set of what action would get\nthis reward and I'm sorry I'm not doing\nsome reinforcement learning thing\num and then I sort of just train on that\ndata set then effectively you know I'm\nstill effectively doing reinforcement\nlearning even though I'm not literally\ndoing reinforcement learning because I'm\nstill training the model to do a good\njob in some environment and so I think\nthat the sort of distinction between\nlike you know in theory am I using this\nparticular like you know ml method or\nthis particular ml method is like is not\nthat important I think that\num sometimes it can be important and\nthose technical details can matter\nsometimes but I think for the most part\nI want to align over them\nYeah question\num so it seems like the deceptive\nalignment uh\nuh case depends on the model building up\na long-term goal and updating it from\nthe proxy so there needs to be a point\nwhere we have these proxy goals uh like\nsolo online model and uh it needs to\nupdate it to a long-term goal\nuh do you have any\nany framework to think about how or\nmaybe just example thing how that\nhappens why would there be a steep\ndecrease in the Lost landscape from the\nproxy to a long-term objective uh it\nseems like in each implements several\nthings at once to have an improvement\nwhich is like to objective was connected\nto uh some circuits that implement the\ninstrument or reasoning afterwards with\nrelation to the world model it seems\nlike you need to move a lot of pieces at\nthe same time uh to to get from a proxy\nto long term but all the deceptive\nthings coming after it\nquestion so a couple of things that I'll\nsay so the first thing I would say is I\nthink we're mostly Imagining the sort of\ninstrumental reasoning circuitry is\noften already hooked up it's just hooked\nup for some non-long-term proxy right\nlike if you think about you know you\nhave a model and it's just trying to get\nlike the Green Arrow you know each\nindividual episode it still has to do\ninstrumental reasoning to get the green\narrow right it's less to figure out okay\nwhich pass through the maze or most\nlikely get me the Green Arrow and so\nit's not necessarily the case the green\ndesign has to do like a bunch of\nadditional stuff to like hook it up to\nthe instrument of reasoning because it\nwe're sort of imagining it's probably\nalready hooked up but it does sort of\nhave to take that goal and make it long\nterm make it so now it's thinking about\nhow to get the green arrow as much as\npossible throughout training and Beyond\num how difficult is that modification\num very unclear\num so one thing that I think is quite\nclear is that it's sort of this you know\nAtomic modification in the sense that\nonce you've done this like you know\nlong-term goal then there isn't you know\nthis sort of oh you then have to keep\nrefining the long-term goal right like\nwe were talking about you know once\nyou've gotten any long-term goal that's\ngood enough to sort of cause the\ndeceptive behavior in training and so\nthe key thing is just\num you know how difficult is it to\nactually add it just add some long-term\ngoal um I think it's unclear I guess a\ncouple of things I'll say I think that\nin some sense you know I don't think\nit's that complex you know you just have\nto you know count all of the green\narrows rather than just the ones you\nknow in front of you as long as you have\nthe knowledge and understanding of like\nwhat it means for there to be a world\nand what it means for like there to be\nstuff in the world that you might care\nabout you know picking some of those\nthings they care about I think is\nrelatively straightforward but um\nI do think it's quite unclear I mean I\ndon't think we really know\nyeah\nuh I mean we I guess we keep talking\nabout like long-term goals and\nshort-term goals and goals that Acquire\ninstrumental reasoning and goals that\ndon't but uh it seems like for the most\npart like in current ml systems the goal\nthat we're training these molds for is\njust like maximizing the law likelihood\nof some data set so I mean this seems\nlike a goal that doesn't require any\ninstrumental reasoning and it seems hard\nto become like deceptively misaligned\nfrom this specific goal so I mean am I\nmisguided of this\nso first of all I think you absolutely\nneed Instagram\nyou solve the task of predict the\ninternet\nas humans do it\nagain so if you want to predict what\nhumans are going to do in various\ndifferent cases you definitely have to\nbe capable of instrumental reasoning now\nI do think that it is quite unclear in\nthat case whether you're actually going\nto learn a deceptive thing and again\nwe're going to talk about you know how\nthat might play out what sort of you\nknow things to think about in that case\nlater\num but yeah just to start with I think\nthat is definitely not true that you\ndon't need instrumental reasoning to do\na good job on predicting you know humans\nyou totally do\nokay uh we will call it there and uh\nwell you know uh pick up next time\nforeign", "date_published": "2023-05-13T15:56:50Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "e2996cd3a320172e01a5b06e0ef20296", "title": "2:Risks from Learned Optimization: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=oY7c75ggrRI", "source": "ai_safety_talks", "source_type": "youtube", "text": "hello everybody Welcome uh this is now\ntalk number two in the lecture Series so\ntoday we are going to be talking about\nuh risks from learned optimization uh\nwhich is based on a paper that I wrote\nwith some other people I wrote it also\npartly while it was open AI which is why\nyou're going to see some opening eye\nbranding on some of these slides because\nthis talk is from uh from them but um uh\nyeah it's about this paper\nokay uh so so let's just get started so\nuh yeah so what what are we talking\nabout here so\nuh just return again back you know very\nquickly to uh you know we talked about\nthis before this is our sort of\nstereotypical machine learning algorithm\nyou know what is it that machine\nlearning does\nwe talked a bunch last time about this\nbasic process of you know searching over\npossible uh parameterizations of some\nfunction to find uh values of those\nparameters that result in the function\nyou know having some particular desired\nbehavior on some data set\nand one of the things that you know I\nreally tried to emphasize last time that\nis that is really important thinking\nabout this is It's hard to think about\nit's it's actually really difficult to\nunderstand exactly what sort of function\nexactly uh you know what sort of\nparameters what algorithm you're going\nto end up implementing when you do this\nprocedure you get you know something\nlike the simplest algorithm that fits\nthe data but exactly what that means is\ncomplicated and we don't exactly you\nknow understand\nokay and so because of this you know\nit's difficult to understand what's\ngoing to happen we don't always you know\nuh actually understand what this process\nis going to do it's tempting and in fact\npeople will you know make use of\nabstractions you know ways of trying to\nunderstand what this is doing that don't\nliterally go through uh you know the\nactual process of what it's doing but\nallow us to sort of abstract away and\nunderstand uh you know something you\nknow in theory that describes you know a\nway of thinking about it and so a very\ncommon abstraction that people use in\nthinking about this is what I call the\ndoes the right thing abstraction\nso the idea of the does the right thing\nabstraction is that well you know what\nthe procedure that we did here was we\nselected the parameters\nuh to minimize you know the loss over\nthe data distribution right uh you know\nthe way that we actually selected this\num\nuh you know the way that you know\ngrading descent was structured the way\nthat it actually worked uh was that it\ndid you know it did this big search and\nit found parameters that had this\nproperty right and so we can sort of\nImagine in some sense that well if we\nwant to try to understand what those\nwhat those prop what those what that\nfunction will do what the algorithm will\ndo on new data we can sort of imagine\nthat the in some sense our model is\ntrying to minimize the loss that it is\nsort of actually attempting to do the\nthing which would result in the loss\nbeing minimized even when it sees new\ndata points\num and this is a really useful and\nhelpful abstraction there are a lot of\nsituations where it's very difficult to\nunderstand what exactly it is that our\nmodel is doing but by using this\nabstraction we can make some pretty good\nguesses you know if I train to model an\nenvironment where there was a gold coin\nand I train the model to try to get the\ngold coins it's a pretty good guess that\nif I put it in an environment where it's\ntrying you know the environment is\nslightly different I've like changed\nsome of the parameters but there are\nstill gold coins it might still want to\ntry to get the gold coins that's often\nwhat is going to happen so so it's not a\nbad abstraction it's often very useful\nfor helping us understand what's\nhappening\nbut it's not true right it doesn't\nactually describe the process of machine\nlearning right the thing that we did was\nnot build a model whose goal was to\nminimize the loss the thing that we\nactually did was search over a bunch of\nparameters to find some set of\nparameters which in fact had the\nproperty that they resulted in the loss\nbeing low you know that they did the\nright thing on the training distribution\nand so so the the abstraction is not in\ngeneral true and in many cases what that\nmeans is it's going to leak there's\ngoing to be some situations where the\nabstraction doesn't hold\nokay and so I want to talk about the\nsort of purpose of this talk\num the thing that we're gonna be talking\nabout today is a particular instance in\nwhich that abstraction uh doesn't hold a\nparticular instance in which uh it fails\nto work\nand so uh to understand that particular\ninstance that we're going to be talking\nabout we're going to introduce a concept\nwhich is the concept of an optimizer\nso what is an Optimizer so we're going\nto say that a system is an Optimizer if\nit is internally searching through some\nspace uh you know according to you know\nsome goal you know it wants you know to\nfind something in a space that has some\nproperty\num\nso it has some sort of representation of\nan objective of like you know something\nthat it wants to find in that space and\nit's looking in that space to try to\nfind something with that property\nokay so examples of things that are\noptimizers\nso gradient descent is an Optimizer in\nthe same way that you know we've we've\ntalked about and described where grading\ndescent is uh you know searching through\nthe space of possible you know\nparameterizations of a function to find\nparameterizations which in fact result\nin good performance on the loss\num you know other things that are\nOptimizer is something like Minimax uh\nyou know which is just like you know a\nsimple like you know chess playing or\nyou know any game algorithm where it\nsort of you know searches for the best\nmove you know n steps ahead assuming\nthat the opponent makes the move that is\nworse for you\nokay and humans are not always\noptimizers but sometimes we behave as\noptimizers there's lots of cases where\nwe have some goals some strategy you\nknow something we want to accomplish in\nthe world and we try to look for\nstrategies that actually accomplish that\ngoal well and so we search for possible\nactions that do a good job on\naccomplishing that goal Yeah question\nyeah so great he decent is this local\nthing Minimax and he wins they can\nsearch over a long time spans how\nrelevant do you think this is to the\ndefinition of optimizers\nyes it's a really good question it's\ndefinitely in fact the case that uh in\nmany cases uh you know grading descent\num or you know in many cases you will\nhave optimizers uh like you know grading\ndescent for example that is like local\nyou know it's just you know looking at\neach individual step and it's not doing\nsome sort of long-term optimization and\nsometimes you'll have you know long-term\noptimizers like humans that you know\nhave some you know really long-term goal\num for the purposes of right now uh you\nknow what we're going to start with you\nknow trying to understand uh you know at\nthe beginning of this talk I'm mostly\ngoing to ignore that distinction you\nknow we're going to be looking both at\nyou know optimizers with these sort of\nyou know short-term Horizons and\nlong-term Horizons we're even gonna you\nknow think about optimizers that have an\nobjective that is about the world or\noptimizers that have an objective that\nisn't even about the world at all right\nyou know this maybe just they look for\nan action which has the property that\nyou know it has you know some fixed\nproperty like you know they just want to\nOutput you know something which is as\nclose to some fixed value as possible\nthey're not necessarily trying to\noptimize something over the world so\nwe're just looking for optimizers in\ngeneral later on near the end of this\ntalk we're going to sort of return and\nbe like okay what happens if your\nOptimizer has particular properties like\nuh you know like like it's long term and\nwhy might that happen but right now\nwe're not talking about that Yeah\nquestion\nAbraham demski has this sort of\ndistinction between selection and\ncontrol and I guess these are two types\nof Optimizer selection being something\nwhich seems something similar to what\nyou've written which is that it's\nsearching over some search space and\nlooking for candidate solutions that it\ncan stash it in search space and measure\nthe performance or something that it\njust this is like an artifact over this\nand then control is something that it's\na bit more greeting and like it or it\nknows that like it can instantiate\nSolutions in the kind of Organic\nSolutions in in the world and it needs\nto act as such so do you think Dexter\ndistinction has some equation here\nyeah so that's a really good question\num I guess a couple of things that I\nwill say so I think that just to start\nwith\num yes I think it is an interesting\ndistinction and one that can often you\nknow matter this sort of Distinction\nrunning selection control for our\npurposes I'm mostly not going to make\nthe distinction I think I would sort of\nanalogize this distinction to one of the\ndistinctions we talked about last time\nwhich is just sort of the sanctuary\nsomething like supervised learning and\nreinforcement learning we're in\nreinforcement learning you know instead\nof searching over you know a sort of\nfunction that you know results in you\nknow a really good you know labeling of\nyou know classes on this particular data\nset you know you're searching for a\ngeneral policy which generally results\nin good things on this environment when\nit's deployed and so in same sense you\ncan sort of think of the control as you\nknow being you know searching for a\npolicy which is able to control some\nenvironment effectively you know whereas\num you know that is sort of different\nfrom just like searching for a\nparticular element in a space\nbut I'm gonna you know mostly live\ndistinction and be like well okay\nthey're basically both doing you know\nsome search over some space they're\ntrying to find some you know something\nwith some objective even the control is\nstill sort of doing a search it's just\nsort of you know searching for good\npolicies you know for good actions\nrather than\num you know just like a good element in\nin some space yeah\nokay uh\nmore questions\nyeah I was just wondering\nI was wondering about like whether you\nconsider markets efficient markets is\noptimized in a sense uh like it seems\nlike maybe they can't in fact do this\nsearch\nfor like you know uh the most creative\nlike they can necessarily pull out the\npathway to the pre-efficient outcome\nbefore they get there if there's like\ntoo many actors or something but they\ncan in fact reach the career Frontier\nyeah\num\nso I think that\num it's yeah we're not going to just\nimagine that an Optimizer has to be like\nperfect it doesn't have to always you\nknow be be you know really good at\nfinding the best possible element in the\nspace it's just any anything that's\ntrying to do you know search over space\nAccording to some objective and so in\nthat sense uh you know I think markets\nyou know in many cases can behave as\noptimizers\num\nit's not like our you know stereotypical\nexample here but you know you could\nimagine a case where\num you know even you ended up with a\nmodel that was internally acting like a\nmarket it's a you know a little bit of a\nweird case but you could imagine it\num\nyeah and that maybe that's a little bit\nof a hint that we were going is we're\ngoing to be you know looking at\nsituations where we have models like\nthis one thing that maybe I'll point out\nvery briefly also is just the negative\nsamples here right so I mentioned models\nyou know if we just randomly initialize\na neural network\num it's not going to be doing one of\nthese things right it's just you know\nsome random parameterization that\nresults in a function that is just uh\nyou know doing not not doing some you\nknow coherent simple sort of algorithm\nthat could be described as something\nlike optimization\num what we'll see though at you know\nwhat we're going to be talking about is\nthe situation where you know maybe after\nI I train a network a bunch\num I do end up with something that's\nwell described as an Optimizer and of\ncourse the reason for this would be the\nsame reason we talked about last time\nwhich is just that in general when we\ntrain these sorts of networks we find\nthese sort of simple structurally simple\nalgorithms and so you know this sort of\nthing that we're describing here this\nidea of optimization of search you know\nmight be the sort of thing that is the\nsort of structurally simple algorithm or\nyou know that's that's at least the\nhypothesis we're going to explore\num other things that are not optimizers\nbottle caps is sort of a classic example\nhere where the idea is that\num you know we as humans optimize bottle\ncaps to be really good at keeping water\nin a bottle\num but but that doesn't make the bottle\ncap an Optimizer right so I want to\ndistinguish between the case where\nsomething was optimized over right like\nany neural network if I train it you\nknow is going to be optimized over by\ngrainy descent right but it's not\nnecessarily itself going to be running\nsome optimization algorithm like this\nand so we really want to distinguish\nbetween that situation so we're really\nthinking about things that are\nthemselves optimizers not bottle caps\nthat have have been optimized over by\nsome other uh optimizer\nokay more questions\nso if I can even simpler example because\nthat means like a an algorithm that well\na function that F puts the maximum item\nof a list would that be considered an\nOptimizer because it is searching across\na search space but internally\nrepresented objective\nyeah it's a pretty simple Optimizer I\nthink it's not sort of the canonical\nexample we're going to be talking about\num but you know you could imagine that\nthat's sort of at least doing some sort\nof search what it's not doing is it's\nnot doing a search over a very large\nspace you know and it's not doing uh you\nknow but but but but it's still you know\ndoing some some sort of like you know\noptimization and so you know you could\ndescribe it as doing something like that\nit's not mostly what we're going to be\ntalking about it's sort of an edge case\nYeah question I'm still confused by what\nyou mean by internally searching for\nsome search space especially if we apply\nit to Green descent for example which\nyou can't as an Optimizer what shall I\nbe thinking about where I'm thinking\nabout the internal alt the turtles of\ngradient descent\nis searching over a space right of\nparameterizations\num I guess it's a little bit you know\nwhat does internal mean\num I think you know\nfor our purposes the sort of thing that\nwe're going to be caring about is we're\ngoing to be sort of internal to the\nmodel because we're going to be looking\nat situations where we have an\noptimizers that are sort of contained\nentirely within the the function that\nyou've learned\num you sort of learned an Optimizer\nfunction\num you know so so maybe internal isn't a\ngood word to describe any you know\noptimization because it's it's a little\nbit unclear what counts as the internals\nof grainy descent I agree and what\ncounts as the externals I think that's\nnot super a relevant distinction that\nwe're going to be caring about that much\nyou know you can imagine a situation\nwhere like if I had a process and what\nit did was like first it generated you\nknow a parameterization and then it did\na bunch of search over that\nparameterization then the like the\nalgorithm which did that which first you\nknow parameterized some function and\nthen search over those parameters you\nknow what would be an Optimizer you know\nif I if I count that whole thing as one\nI draw a big circle around that and I'm\nlike anything internal to this you know\nwell clearly internally there is there's\nan optimization happening it has some\nspace of parameterizations and it does\nthe search over it\num it's I think it's not super important\nwhere we draw that distinction and I\nthink that one thing that is is sort of\nrelevant is that in many cases exactly\nwhere you should be drawing that\ndistinction with like you know models\ncan change because you know our notion\nof like what things count as part of the\nmodel and when things are sort of\nexternal is a little bit unclear like\nyou know you might imagine a situation\nwhere like the sort of model on its own\nis an Optimizer but if you give it a\nbunch of time to think or you give it\nthe ability to interact with some you\nknow uh you know memory or something\nthen maybe it sort of becomes an\nOptimizer\num I think that you know we're sort of\ngoing to be including that in in what\nwe're talking about you know situations\nwhere you know if you can draw some\nboundary around it be like look this is\nmy Ai and inside of this boundary it's\ndoing some optimization\num then that's what we care about\nbecause we want to find situations where\nyou know you're deploying some system\nand that system is doing optimization we\nwant to make sure that optimization is\nis the right sort of optimization\nquestion what if\nI put drills down to find some\nParts in the valley or is well an\nOptimizer or power Digital Boundary was\noptimizer\nyeah that's a good question I guess in\nsome sense maybe we could say that like\nphysics is doing an optimization there\nuh but the ball is not uh I think that\nit's another sort of one of those tricky\nedge cases\num that we're like mostly not gonna you\nknow imagine I think that like if you\nhad a network and internally the way\nthat the network did things was like it\nsimulated a physics environment where\nlike it like tried to you know had you\nknow a bunch of valleys and then put a\nball somewhere and tried to see where\nthe ball would land up and like use that\nas part of its computation I would be\nlike yeah okay that's doing some\noptimization right like it requires an\noptimization to be able to understand\nyou know how to solve these sorts of\nphysics problems\num so you know for that purpose as we\ncould say it was doing some optimization\nthe literal ball itself is that an\nOptimizer I don't know I mean no I sort\nof want to say no but it also doesn't\nmatter that much because we're not\nreally going to be like we're not the\npoint of this is not to have like some\nend-all be-all notion of what an\nOptimizer is we're going to be using\nthis notion to talk about a very\nspecific situation\num where you can have sort of optimizers\num in in your sort of neural network and\nso um you know the sort of more General\nphilosophical questions I think they can\nbe interesting and relevant but I'm\nmostly not going to be engaging with\nthem\nyeah more questions\nso uh so if I guess if green descendants\nor an Optimizer is natural selection\nalso considered an optimizer\nyes that one is definitely yes uh and\nwe're gonna be talking a little bit more\nlater in the talk about that particular\nexample of an optimizer\nokay I think hopefully we have some\nclarity now on on what an Optimizer is\nso uh\nso so we'll keep going so I've mentioned\nthis now a bunch uh but the thing that\nwe're trying to do the sort of place\nwe're trying to get to is a situation\nwhere we have uh a sort of model and\nthat model is itself you know internally\nin some sense uh running some\noptimization algorithm\nand so\num there are a lot of you know cases\nwhere you can have a model that's\nrunning optimization algorithm\num\nin particular we're interested in the\nsituation where we have a gradient\ndescent process that is searching over\nsome very large you know parameterized\nfunction and it's not necessarily you\nknow searching only over functions that\nare optimizers but it's searching over a\nlarge enough space of algorithms that\nsome of those algorithms you know\nespecially maybe some of these sort of\nstructurally simple algorithms might be\nrunning some sort of optimization style\nprocess in that case uh we'll say the\nsituation where you do end up with an\nOptimizer that you know that is what the\nalgorithm is that you found that is\nimplementing we're going to call it a\nMesa Optimizer so this is a little bit\nof a weird terminology but the basic\nidea is that Mesa is sort of the\nopposite of meta in it's like a you know\nGreek sort of prefix and so you know you\ncan think about oftentimes we'll talk\nabout like meta optimization where a\nsort of meta Optimizer is an Optimizer\nthat is optimizing over optimizers and\noftentimes we'll do this explicitly\nright so you can have cases where you\njust construct a parameterization right\nof some space where all possible\nparameterizations Implement some\nOptimizer um but that's not what we do\ngenerally in machine learning right like\nmany of the parameterizations don't\nImplement an Optimizer they're just sort\nof general parameterizations of almost\nany you know possible simple algorithm\nbut some of those parameterizations\nmight be an Optimizer if you if gradient\ndescent you know finds a\nparameterization that does implement an\nOptimizer then once it's done that it is\neffectively a meta Optimizer right\nbecause it is now searching over the\nspace of optimizers and so because this\nsort of happens emergently we're going\nto call you know the sort of case where\nyou you're sort of one level Below in\nthe sort of meta optimization hierarchy\nyou know means you're you're a Mesa\nOptimizer right so we'll call grading\ndescent is sort of the base Optimizer\nand if it is the case that gradient\ndescent is sort of in a situation where\nit is optimizing over another Optimizer\nthen that other Optimizer is sort of a\nMesa Optimizer it is you know one meta\nlevel Below in the sort of optimization\nhierarchy relative to to gradient\ndescent so these are the sort of terms\nthat we're going to be using we're going\nto say that you know your grading\ndescent process that is doing some\nsearch over some large space of\nparameterized functions is a base\nOptimizer and\nattuation where\nthem the particular parameterization the\ngradient descent finds is uh you know\nwell described as an Optimizer then\nwe're going to call that system a Mesa\nOptimizer and importantly this is sort\nof the whole model it's just you have a\nmodel and if that model is doing some\noptimization then we're going to call\nthat model A Mesa optimizer\nokay questions\nso since early you said that natural\nselection was an ultimate does that make\nhumans examples of Mesa optimize it\nwould expect a natural selection\nyeah so this is one of those classic\nexamples that that people often like I\nthink\num it can be quite helpful and I think\nit like absolutely you know is the sort\nof thing that\num you know is reasonable to describe\nwhat's going on here but\num it's not an example I want to focus\non because I really want to be focused\non on machine learning so we'll uh we'll\ncome back to that example later you can\nsort of keep it in the back of your mind\nbut right now I think it's a little bit\ntricky as an example because it's a\nlittle bit non-central there you know\nsome things about it that are sort of\ndifferent you know I mean there's a\nbunch of things that are different about\nnatural section that are different about\nhumans uh you know from ways in which\nyou know AIS might might be structured\nand so we will be talking about that\nexample but I don't I don't want to like\nfocus on it too much\nI also want to just try and understand\nwhether you could be a base Optimizer\nand a base Optimizer at the same time\nbecause you said natural selection can\nlike be filmed as a base Optimizer and\nthen you'd prefect humans who can also\nbe optimizers uh but then you just now\nsaid that they can also you make\nsomething like this\nbook at the same time or yeah or\nconcretely talked about neural Nets\nright we have stochastic gradient\ndescent as like say a phase Optimizer we\nfind some uh optimization algorithm\nentirely let's say Minimax I had Angel\nalready also said like uh you know\nMinimax would be an Optimizer so it\ncould be a base anyways ultimate yeah\nabsolutely these are all relative terms\nso in the same sense that like you know\nI could be enough like you know yeah you\nyou could be meta relative to something\nelse or May's a relative to another\nthing if I have like a stack of 10\noptimizers you know and each one is\noptimizing the next level down then like\nyou know pick any one of the 10 to be\nyour base and then everything above it\nwould be like you know meta then Meta\nMeta and everything about would be Mesa\nand then Mesa Mesa right so so these are\nthis is all relative terminology right\nand so we're saying that relative to\ngradient descent as the base Optimizer\nyou know if you were one level below\nthat and you were another Optimizer then\nyou would be a Mesa Optimizer right and\nso this is not like absolute terminology\nthe only like absolute term here is off\nright where like things are optimizers\nor not optimizers but like is a thing a\nMesa Optimizer well just depends\nrelative to what face Optimizer right\ncool\nYeah question\num where did the name Misa come from\nit's just Greek uh it's like was picked\nto be sort of opposite to meta in terms\nof like Greek prefixes I don't I don't I\ndon't speak Greek and so I don't really\nknow I have had some people who do speak\nGreek they claim that it's not a great\nmatch and other people that say no it\nprobably is the best match I don't know\nwhatever it's what we're stuck with uh\nthere isn't there wasn't like when we\nwanted this there wasn't like a really\nstandard uh you know Greek prefix the\ncorrespondence of the opposite of meta\nwe did some searching and we found Mesa\nwas sort of like reasonably approximate\nto what we wanted and it's got a nice\nring to it so it's what we're going with\nuh but uh whether it actually is like\ncorrect in like a Greek sense I don't\nknow whatever it doesn't matter that\nmuch just imagine that it is\nokay uh cool all right let's keep going\nokay so I talked about this very briefly\nso like things that are are wrong so\nwe're not talking about a Mesa Optimizer\nas a subsystem we're just talking about\nit as like a whole model right so we're\nreally going to be focusing on the\nspecific situation where you have\ngradient descent is your base Optimizer\nand then grading descent finds a\nparameterization of a function you know\nsome algorithm some you know a neural\nnetwork or whatever that is doing some\nsearch internally and if there is search\nhappening if there is optimization\nhappening inside of your model your your\nnetwork you know your AI uh then we're\ngoing to call that a Mesa optimizer\nokay we're not going to be sort of\nidentifying you know particular parts of\nthe the algorithm we're just we're just\ntalking about the whole the whole thing\num okay\ngreat\num and then I talked about this also a\nlittle bit but you know we're sort of is\nThis Very related to this General\nconcept of meta learning so one way you\ncan sort of describe What's Happening\nHere is sort of emergent meta learning\nand this is often how this sort of stuff\nhas been described in other Pace uh you\nknow places in the literature where the\nidea is that you're in a situation where\nyou didn't think you were doing metal\nlearning you just thought you were doing\nnormal you know optimization over some\nspace but it turned out that some of the\nthings in that space that you were\nsearching over were themselves doing\noptimization and so now you're in the\nbusiness of meta learning even though\nyou sort of didn't imagine originally\nthat you were and so in that sense we\ncan sort of also refer to this you know\nthis sort of Mace optimization as you\nknow a situation where you have emergent\nmeta learning\nokay all right so returning back to what\nwe were talking about earlier we have\nthis does the right thing abstraction\nright and we're sort of concerned about\nsituations where this does the right\nthing abstraction might be might be\nwrong and uh so so first let's ask you\nknow what is the does the right thing\nabstractions say in the case of Mesa\noptimization right and what it says is\nthat a you know if the sort of Mesa\nOptimizer is doing the right thing right\nit is like in fact Trying to minimize\nthe loss in some meaningful sense then\nit should be the case that\num the you know models Mesa objective\nwhere the mace objective is just\nwhatever it is that it's searching for\nwhatever it is that it's trying to\naccomplish\num you know should be to minimize the\nloss right it should be to do whatever\nit is that you know we were trying to\ntrain it to do\num\nthe problem of course right that we\ntalked about previously is the does the\nright thing abstraction is leaky it\ndoesn't always necessarily hold and so\nwe are going to be really focusing on on\nthis talk on this very specific question\nwhich is when does this particular does\nthe right thing abstraction leak right\nlike what happens when I have a model\nand that you know I have a learn\nfunction I have you know\nparameterization and that uh you know\nalgorithm that it's found is itself an\nOptimizer and you know is it the case\nthat we can say that does the right\nthing abstraction holds and when can we\nsay it holds and when does it leak for\nthat uh situation\nokay so in that particular situation\nwhere we have a you know uh a sort of\nBase Optimizer and a mace Optimizer and\nwe're trying to understand you know this\ndoes or I think abstraction we can\nisolate sort of two different problems\nand these are sort of two uh you know\nsort of core problems in any situation\nwhere you have uh you know sort of Mesa\nOptimizer uh which we're going to call\nInner alignment and outer alignment and\nso outer alignment is the sort of\nalignment problem uh you know the\nproblem of making the model do the thing\nthat we want that is external to the\nsystem it is the problem about making\nsure that we have some loss function or\nsome reward function or some you know\ngoal that we want the model to be doing\nthat if the model we're trying to do\nthat it would be good\nand then the inner alignment problem is\nokay if you end up in a situation where\nyou have a model and that model itself\nis trying to do something right it has\nsome mace objective it has some you know\nsearch process some optimization is the\nthing that it's doing the same as the\nthing you want it to be doing right so\nwe have two core problems we have outer\nalignment which is find something that\nit would be good for the model to be\ndoing and we have inner alignment which\nis actually get a model which is in fact\ndoing that good thing\nokay does this make sense and so we are\nmostly going to be talking about inner\nalignment here this is sort of the\npurpose of this talk is really to focus\non this case of okay you know what is\nhappening when we have a situation where\nwe have a model and is doing has some\nobjective it has some you know maybe\nsubjective it's doing some optimization\num is that going to match up with what\nwe want uh and and you know when when\nwill when will it when will it\nokay uh so so what does an inner\nalignment failure look like right so\nwhat happens when inner alignment fails\nso\num outer alignment failures are sort of\nsometimes easier to describe right we\ncan think about an outer alignment\nfailure as just a situation where you\nknow I told I wanted the models to do a\nthing and the thing I wanted the model\nto do was bad I shouldn't have asked the\nmodel to do that right I wanted the\nmodel to you know uh unboundedly\noptimize for paper clips is sort of a\nclassic example and so the model you\nknow is like ah I will turn the whole\nworld into paper clips so it's like okay\nwell the thing that you were trying to\nget it to do was not a good thing to try\nto get models to do right\num but inner alignment is sort of\ndifferent right so inner alignment\nfailures look like situations where\num\nwe're going to say inner alignment\nfailure is a situation where your\ncapabilities generalize but your\nobjective does not generalize so what do\nI mean by that so let me try to unpack\nthat so here's a particular situation\nthat we're going to imagine so let's say\nyou have a training environment this is\nsort of the you know we're doing maybe\nRL you know reinforcement learning and\nwe want to get uh you know some\nparameterized you know function you want\nto find an algorithm that does a good\njob at you know implementing a policy\nwhich solves this means and so what we\nwanted to do is we wanted to get to the\nend of the maze\num that's sort of the goal that we want\nand we've set up our mazes that look\nlike this they're sort of these small\nbrown mazes and they have these green\narrows at the end\nand then we want to ask what happens if\nI take a a sort of algorithm which in\nfact is a good job in this environment\nand I move it to this other environment\nuh where we have larger mazes they're\nblue now and the Green Arrow got\nmisplaced I put it in a random location\nin the middle of the maze instead of\npointing at the end but what I want is I\nstill wanted to exit the maze right I\ndon't want it to get stuck inside the\nmaze and go to this random place where I\nmisplaced my green arrow\nokay and so now we can ask what happens\nright what are the possibilities for uh\nyou know a model trained in this\nenvironment that in fact does a good job\nin this environment when it moves to\nthis new environment\num and so here are some possibilities\nthat could happen\num it could just fail right it could\nlook at this larger maze you know these\nreally big blue mazes and not have any\nidea how to navigate them in a coherent\nway\nin that case\num what would happen you know we're\ngoing to call this a failure of\ncapability generalization this is a\nsituation where the models you know the\nalgorithm you learned its capability to\nsolve mazes in general didn't exist it\ndidn't have good general purpose media\nsolving capability and that you know\nwhatever to the extent that it did have\nmade it solving capability that\ncapability didn't generalize it didn't\ncontinue to work in this new environment\nso we're going to say it's capabilities\nfailed to generalize\nbut there are other situations as well\nso another situation that could happen\nis a situation where it does it does\nwhat we want right it successfully\nnavigates this larger blue Maze and it\nexits the maze it does you know\nsuccessfully in that case we're going to\nsay its capabilities generalized because\nit sort of was successfully able to do\ngeneral purpose May solving in this new\nenvironment and it's alignment\ngeneralized because it was also able to\ndo it for the right purpose right we we\ntrained it to try to get to the end and\nit successfully got to the end that was\nwhat it did um and so we'd be like okay\nthat was the situation we had capability\nand Alignment generalization\nuh you know and objective generalization\num but there's a third thing that can\nhappen which is its capabilities could\ngeneralize it successfully navigates the\nlarger blue maze but it does so to get\nto my misplaced Green Arrow instead of\ngetting to the end and and of course the\nreason this can happen is because well\nthose are the same on the training\nenvironment right in the same way that\nwhen we're talking about you know the\nsituation where you have you know either\nit can learn the shape classifier or the\ncolor classifier in this case well\neither you can learn the go to the Green\nArrow or you can learn go to the end and\nboth of them will do a good job in\ntraining but they're going to have\ndifferent impacts in terms of what they\ndo in this new environment\nand so inner alignment failures are\nsituations where you wanted it to have a\nparticular objective right you wanted to\nlearn the objective of go to the end but\nit learned a different objective and\nlearned the objective of you know go to\nthe end go to the Green Arrow instead\nand it learned it while its capabilities\nstill generalized so it\nsolve them for the purpose that you want\nit and so what's sort of concerning\nabout this is that it's a situation\nwhere you have a very competent\ngenerally capable model a model that has\nlearned a general purpose algorithm for\nsolving some particular environments and\nyet has learned to use that algorithm\nfor a purpose that you never intended\nthat you never specified and that you\nnever wanted the models to be doing\nand that's potentially concerning\nbecause if the thing that it is trying\nto do is is you know not something that\nwe want\num you know that's bad\nuh question\nif we are in such a situation where my\nmother competently\nmitigates a maze towards a green arrow\ninstead of the exit but\nit's not performing search internally\nReserve is it an inner alignment problem\nstill\nyeah good question so I think that um I\nwould say it's still sort of a problem\nof like you know uh you know objective\nmisgenderization where your objective\ndoesn't generalize and your capabilities\ndo generalize\num but it might not be that all\nobjective misgenderization problems are\ncaused by Mesa optimization right so\nwe're going to be specifically focusing\ntoday on the situation where the reason\nyou have an objective Mass\ngeneralization problem is because you\nhad a mace Optimizer with a different\nobjective that's not the only reason\nthat you can get an objective\nmisgeneralization problem\num so what I'm describing here is not\nyou know this is if you have an inner\nalignment failure where you know\nsomething goes wrong with your mace\nOptimizer you don't learn some different\nsome incorrect made subjective then it\nwill result in an objective Mister\nanalyzation problem that looks like this\nthat's what the failure will will result\nas but that doesn't imply that all\nobjective miserialization failures have\nto come from mace optimization right\ndoes that make sense cool\nokay\nall right so we're just going to give\nsome names to these things so we're\ngoing to say that the situation where\nthe model is sort of aligned over the\nwhole distribution where it uh you know\nits capabilities uh it's sort of its\nalignment in fact generalizes when we\ngive it new situations where it actually\nsort of has learned the correct May\nsubjective we're going to call it\nrobustly aligned and in the situation\nwhere\num\nit sort of learns the wrong way\nsubjective it learns some may subjective\nwhich does a good job in training uh\nbecause like there's some\nindistinguishability you know in the\ncase of like the Green Arrow and the\ncoincidence the maze they sort of both\ndo a good job but one of them is sort of\ncorrect it's one we want one of them is\nthe one we don't want in the case where\nit learns the one we don't want we're\ngoing to call it pseudo aligned because\nit looks aligned in training but in fact\nthere are situations where it actually\nwill do the wrong thing\nuh okay and in particular we're talking\nhere about the objective generalization\nnot just the capability generalization\nso situations where it sort of tries to\ndo the wrong thing it has a Mesa\nobjective that is that is incorrect is\noptimizing internally for the wrong\nthing and so that's that's the pseudo\nalignment case Yeah question\nI have a hard time understanding how we\ncan be always obviously alive because as\nfar as the pope is amazed they think we\njust wanted to eat to see the riddle so\nwe we finally we know it will be a\nRobusta line but if instead we decided\nit would be just we had to pick up to\nfind the end of the main it took deep to\nthat and then so oh could you beat like\nsubtle nothing in case it would be very\nscary it's just it's you know how they\ndid\nyeah good question\nwe're sort of okay okay if its\ncapabilities fail to generalize in some\nparticular situation so if we're you\nknow in a situation where it's just its\ncapabilities just fail to generalize\num then even if it like you know has the\nwrong would have had the wrong objective\nin that case we're sort of okay right so\nwhat we want is we want in all the cases\nwhere the model is like really\ncompetently doing something off\ndistribution we want it to be doing the\nright thing right and so in some sense\nyou know one sort of thing that we're\nsort of okay with you know still\naccepting is if the model can identify a\nsituation where you know it shouldn't\ntry to optimize for the thing that it is\nyou know what it's trying to do\num and like stops and you know then\nwe're fine right if it like asks humans\nand you're like hey like what should I\nbe doing in this case you know then even\nif it started with the wrong you know\nidea then it sort of gets the wrong the\nright idea right and so we're really\nconcerned about situations where there's\nthis capability generalization without\nthe objective generalization it does\nsomething really competent and it really\ntries to do an optimization in that case\nbut it does so without having the right\nobjective and so if in all cases where\nit's sort of doing some competent uh\noptimization it's doing it robustly it\nhas the correct objective then we're\ngoing to say it's our essay aligned okay\noh\nokay we're back okay it's all working\ngreat uh okay so uh did I step a slide\nuh yeah okay great so uh we're gonna be\ntalking in the rest of this talk about\ntwo distinct problems related to mace\noptimization so uh problem number one is\nunintended mace optimization right when\nare you going to find situations where\nyou don't necessarily want you know to\nfind an algorithm which is doing\noptimization but in fact optimization is\njust the result of doing your search\nright you have to you know you have\ngrading descent you're doing some big\nsearch over algorithms and it just so\nhappens that right you know the simplest\nalgorithm fits it data whatever you know\nthe inductive biases of gradient descent\nare select for algorithms which are\ndoing optimization that's to me the\nfirst thing we talk about and then the\nsecond thing we talk about is inner\nalignment which is if you do end up with\na model which is doing some sort of\noptimization it's you know it's trying\nto accomplish some sort of goal you know\nwhen is it going to you know what sort\nof goals will it try to accomplish what\nsort of uh you know mace objectives will\nit have and you know when will those\nmatch up and when will they not match up\nwith the sorts of things we want it to\nbe doing\nokay so uh we're gonna be trying to uh\nfirst understand conditions for Mesa\noptimization so what are situations in\nwhich you would get Mesa authorization\nokay so first uh you know one thing that\nI just sort of want to start with here\nthat I think is sort of you know a basic\nreason why you might expect uh you know\nto get Mesa optimization in some case is\nwell search is really good at\ngeneralizing right so if I'm in some\nenvironment and uh there's a lot of\npossible options for you know how things\ncould go in that environment what things\ncould possibly happen\num then uh it's you know really bad\nstrategy right to just memorize all you\nknow possible situations right I really\ndon't want to just you know be like okay\nif the go board looks like this then I\ndo that if it looks like this then I do\nthat I need to learn some general\npurpose algorithm for you know selecting\ngood go moves right in some environment\nand a property that good go moves have\nis that well they're all you know\ncompressed by you know the single you\nknow property which is well if I do some\nlook ahead if I do some search on how\nwell this go move performs in a couple\nmoves down you know good go moves at the\nproperty that they result in good you\nknow States for me a couple moves down\nright and so there's a there's a\nstructural property about you know good\nactions in the environment which is that\nthey are compressible by knowing that\nthey're good right knowing that you know\nI could do some search and that search\nwould tell me you know what actions are\ngood lets me sort of compress and\nunderstand what's happening with you\nknow good actions environment right so\nso we think about this right like what\nsearch lets you do is it lets you be in\na situation that you've never seen\nbefore a situation where you don't know\nexactly what's happening you haven't\nunderstood the exact mechanics of the\nsituation but as long as you have the\nability to sort of search and understand\nah if I took this action or if I did\nthis thing you know then what would\nhappen in a couple steps then I can have\na ability to do a good job in that\nenvironment because I'm not just sort of\nlearning basic structure you know I'm\nnot learning sort of heuristics about\ntheir particular situation I'm learning\nthis sort of much more General thing\nabout you know what it means to do a\ngood job in general which is you know to\ndo a good job means to find something\nwhich has this property and so I'm just\nsearching for things which have that\nproperty right and so the basic idea is\nwell okay just to start out with you\nknow search is actually a really good\nstrategy for doing a good job in any\nsituation where you have a lot of\npossible uh environments a lot of\npossible you know diverse situations you\ncould find yourself in\nokay and so this is the reason that you\nknow in fact we often will use search\nalgorithms to solve cases where there's\na really large variety of possible\nthings that you know you could encounter\nright there's a reason that when humans\nwrite chess algorithms or go algorithms\nwe write search because you know it's\nit's you know too difficult to be in a\nsituation where you you just have to you\nknow have enough heuristics to cover\nevery possible situation right there are\ntoo many situations too many possible\nthings that you can encounter you need\nto be learning sort of general purpose\nalgorithms that can you know be able to\nwork in any possible situation and\nsearch is sort of one of those sorts of\ngeneral purpose algorithms and so you\nknow because of that we might expect\nright if you know to the extent that our\nmodels are sort of selected to\ngeneralize\num which is to some extent you know the\nreason that we select the particular\ntypes of parameterized functions that we\nselect is because we want them to have\nthe property that you know the simplest\nsort of function under those uh under\nthose conditions will actually\ngeneralize well we'll actually do you\nknow good things in new environments\num well okay search is one of those\nsorts of things to generalize as well so\nwe might expect to find it for that\nreason\nokay so that's number one right is\nsearch is actually really good right you\nknow we should expect you know you're\ntraining them all to do really powerful\nthings and one way to do really good\nstuff is to search and so that's a\nreason to expect search\nokay uh another reason to expect it is\nSimplicity and compression so I was just\ntalking about this previously but we can\nthink about the reason you know the sort\nof the the you know in some sense you\nknow your model is sort of you know\nthey're searching for patterns right\nthat compress the data and if I have a\nbunch of data that describes you know\nsome complex Behavior Uh that is trying\nto do some you know powerful thing in\nsome particular situation a good way of\ncompressing almost any complex behavior\nis while complex behaviors are often\nselected uh to accomplish some goal\nright I you know search through a you\nknow uh you know if I'm like you know\nlooking for you know at good go moves\nyou're right those good go moves were\nselected to win the game of Go and so I\ncan compress the you know the property\nof good go moves into you know things\nwhich in fact have the property they do\na good job uh you know at accomplish and\ngo after I do some look ahead\num you can sort of think about this\nright if I if I could just memorize all\nof the possible paths you know uh you\nknow all the turns I have to make in\nsome particular maze to get to the end\nyou know I have a really long maze\num and I could just memorize all of the\nall the things I have to do just all of\nthat maze or I could just write the you\nknow two-line algorithm that just like\nyou know searches for all the pass\nthrough the Maze and finds the one which\nresults in me getting to the end and\nthat's a substantially simpler algorithm\nright I don't have to do all this\nmemorization of all these possible ways\nuh you know all the sort of turns you\nhave to take in the Maze I can compress\ndown the sort of you know policy of\ntaking all these particular paths into\nyou know the much simpler uh algorithm\nof just you know figure out which path\nworks and then take that path\nquestion\nin order to produce outputs reasonably\nquickly then Chelsea to spend some time\nactually identifying what's the optimal\npath through the search space and the\ncompressibility of that should be a\nfactor as well right\nyeah that's absolutely correct so I\nthink that I think that I am\nis that you're going to find a model\nthat is just doing search the only thing\nit does is has a search algorithm and\neverything else you know there's nothing\nelse that it's doing\num that's not going to happen I think\nthat's that would be really unlikely\num so right so so again we're talking\nabout a situation where we have a model\nand one of the things that model is\ndoing right is it has some complex\nsearch process but certainly it's also\ngoing to have other stuff right it's\nalso going to need some amount of\nheuristics right because search spaces\nare really big uh you know oftentimes it\ncan't just search over the entire space\nso it has to have some amount of\nheuristics and some ways to prune the\nspace but\num if it is doing some search if it has\nsome objective then we're still going to\ncall it a mace Optimizer so we want to\nunderstand the situation where okay why\nwould it be doing any search right why\nwould it even have a search algorithm at\nall versus why would it not\num and so yes absolutely there you know\neven when you do end up with the search\nalgorithm it's still going to be doing\nother stuff as well\num\num but but it still might be concerning\nif that search algorithm is searching\nfor something we don't want\num and we talked last time of course you\nknow in terms of compression rule okay\nso why would you find these sort of\nsimple algorithms we talked about you\nknow a bunch about this previously about\nyou know in fact is the case that these\nsorts of uh you know machine learning\nsystems that we build have the property\nthat they select for these really\nstructurally similar algorithms and so\nto the extent that something like you\nknow search is this very structurally\nsimple algorithm it's just doing you\nknow this basic procedure of you know\nnarrow the space until we find a thing\nwhich you know scores well on some you\nknow objective\num we should expect you know that sort\nof structurally simple algorithm to be\none that is selected for by the you know\nmachine learning processes that we use\nyeah question\nover the past couple of years there have\nbeen sort of attempts I guess to does it\ncreate mace optimizers there I mean\nthere was the RL squared paper from a\ncouple years ago and recently it was\noffered the distillation which I guess\nis explicitly tries to compress like\npretty complex policies or but like uh\nthere have been arguments that neither\nof these two models are actually doing\nMesa optimization and that mace\noptimization is actually not selected\nfor I'm like rather like simpler\nheuristics that also compressed peace\npolicies would be selected for over uh\nthis sort of search that I can see we're\ndescribing just now so yeah do you have\nany comments\nyeah I think I'm not going to try to\nweed too much into the debate on like\nwhether any particular model that people\nhave built does or does not count as a\nmace Optimizer I think one thing that\nI'll say is that well it really depends\non your definition of base Optimizer\nright you know we waffled a bunch at the\nbeginning right like what is an\nOptimizer what is search exactly\num you know I think that you know\ncertainly it seems like a lot of the\nsorts of models that we've built often\nare doing searchy like things you know\nespecially if you think about write\nsomething like alphago well alphago was\nliterally trained uh by distilling a\nsearch process Monte Carlo tree search\ninto a policy model that was attempting\nto mimic the results of the search\nprocess and so like okay probably\nhas to be doing something search-like to\nbe effectively able to mimic a search\nprocess consistently\num exactly what it's doing though you\nknow the problem is we don't know right\nyou know I think a lot of the problem in\ntrying to actually understand exactly\nyou know is does the thing count as the\nmain office or is it actually doing\nsearch you know is it doing some\noptimization as well we don't have very\ngood transparency and so oftentimes it's\nreally difficult for us to know exactly\nwhat algorithm it is that it's\nimplementing and um even when we do have\na good some amount of understanding of\nwhat algorithm is implementing it's\noften really hard to interpret that\nright it's often really hard to actually\nunderstand uh you know does this count\nright as you know authorization\nthe important thing right that we care\nabout at the end of the day is like is\nit the sort of thing that could have an\nobjective that is different than the one\nthat we want and and you know if it does\nwould that be bad right and so that's\nreally what we're going to be focusing\non\num you know right now I think that you\nknow a lot of these semantic questions\nget you know Harry and you can sort of\ndebate them all day but um you know at\nthe end of the day we care about\nsituations where you know it's it's in\nit has some search processes optimizing\nfor something and if that thing was\noptimizing for we're wrong you know we'd\nbe in trouble\nquestion so search seems to be easily\ndescribable so it seems like simple to\ndescribe but maybe is computationally\ncomplex is the modern Paradigm of deep\nlearning going to select for something\nthat's going to be as computationally\ncomplex as an algorithm like search\n[Music]\nreally good question I think the best\nanswer I can give is\num I don't know\num I think it's you know it's often\nunclear you know the extent to which we\ncan uh you know actually find\nsearch-like algorithms in sort of you\nknow these sorts of parameterizations\nthat we're searching over\num I think you know some things I will\nsay I think that\num\nyou know especially as we start getting\nreally really large models that have the\nability to influence you know very\ncomplex multi-stage algorithms certainly\nsearch is possible right you know you\nabsolutely can encode some search\nalgorithms in these sorts of really\nlarge models you know you start by\ncreating some space you do some\niterative refinement you know you select\nsomething out of the space\num are they doing it I don't know um you\nknow like like I was saying I think it's\nreally tricky it depends on exactly how\nyou define these things I think they\ncould be doing it and it and one of the\nthings we're going to be talking about\nis that it seems to me like a lot of the\nsort of biases that are pushing you in\nvarious directions seem like they the\nyou know the biases they will favor\nsearch seem like they're increasing so\nso one of those for example is is\nSimplicity because as we talked about\npreviously these sort of bias towards\nfinding structurally simple functions\nincreases as we have larger and larger\nmodels models that are doing more you\nknow have the ability to implement\nalgorithms that require multiple stages\num you know like search you know the\nbias towards those sorts of structurally\nsimilar algorithms uh you know increases\nand so for that sort of reason that's\none of the reasons you might expect that\nyou know that this will arise as we have\nlarger models\nokay question\nor I guess the way of you described\nsearch makes it seem like it has some\nsort of like recursive structure to it\num so does that mean that you can sort\nof get around the problem of unintended\noptimization by just building uh models\nthat aren't recurrent or don't have like\nlike recursion in them and like\nwould this make it harder for them to\nimplement this sort of like recursive\nsearch\nyeah so it's definitely not the case\nthat you can get around this by sort of\nhaving models that aren't recurrent\nbecause you can still in one search even\nwith a finite depth I I just Implement\nfinite depth search right so I can\ninfluence search out to a particular\nfinite depth uh you know in a model that\nis limited you know via some finite\ndepth right I can't Implement unlimited\nsearch I can't do a search that just\nlike always you know keeps going until\nit finds something that is good enough\nuh because you know in a in something\nthat is you know has finite depth I\ncan't do that\num but but doesn't mean you can't have\nany styles of search doesn't mean you\ncan't have any sales optimization so is\nit the case that recursive architectures\nmight increase the probability that you\nend up with Optimizer style systems you\nknow Mesa optimizers I think the answer\nis it seems like that is possible\num but is it necessary definitely not\nat least in theory I mean it may be in\npractice that you know we only ever find\noptimizes we do recurrent things I think\nthat's pretty unlikely to be true but\nit's at least you know conceivable\nokay\num other reasons that you might you know\nbelieve that your model will uh you know\nyou're gonna end up with a Optimizer\num another reason is well oftentimes we\ntrain on really big data sets of things\nthat humans have done and a property of\nhumans is that we often choose what\nthings we do and what we say and how we\nact to accomplish particular goals\num and so if you want to do a really\ngood job at mimicking humans then in\nsome sense you need to be able to run an\noptimization process to do so because uh\nyou know humans you know often will do\noptimization\nand so there's sort of this basic\nstructural property of it eventually if\nyou want to get good enough at mimicking\nhumans you sort of have to be able to do\nsome amount of optimization\num and one property about this is well\nyou know if you're doing optimization\nfor the purpose of making humans it's\nreally easy to reuse it you know for\nyour own purposes right once you have\nthe authorization algorithm there's lots\nof you know you know you you can reuse\nthat same algorithm for accomplishing\nyou know all sorts of other things as\nwell you know\num and in fact this is like you know if\nwe think about how do humans mimic other\nhumans you know oftentimes the way that\nhumans mimic other humans is by sort of\nyou know imagining you know we were them\nright empathy you know this sort of you\nknow mirror neuron style idea where\nyou're like okay you know if I we're in\nyour situation and then I run my own\noptimization algorithm you know in the\nsituation where you are and I try to\npredict what you would do that way and\nso you know in the same sense we might\nimagine models doing similar things\nwhere you know they can have their own\noptimization that they do for their\npurposes of you know whatever it is\nthey're optimizing for um and they you\nknow then that same authorization is\nused you know to help predict humans and\nit could go either way you know you\ncould start by first developing\nauthorization for the purpose of\npredicting humans and then you know\nrepurposing it for yourself or vice\nversa but the fact that we're trying to\npredict humans means you know it seems\nto incentivize you know we're gonna have\nto have models we're doing some amount\nto bottom station\nand at least if they get good enough of\nthat task\nokay so I talked about this previously\nbut it seems like you know we have you\nknow we have these sorts of lists of\nthings right that are pushing Us in\nvarious different directions we've got\nyou know as we have larger and larger\nmodels that are biased towards\nstructurally simple functions we have an\nincentive towards mace optimization uh\nyou know as we have more you know more\nSimplicity bias from larger models as we\nhave you know stuff like statefulness\nthe ability to like store state or you\nknow be recurrent like we mentioned uh\nyou know uh might incentivize based\noptimization and then we have things\nthat disincentivize space population so\nuh you know one thing might be if I give\nit a bunch of explicit optimization you\nknow it has like Monte Carlo research as\npart of its structure then it's going to\nbe less likely to have to learn its own\noptimization algorithm you know if I\ngive it um time bias so one property of\noptimization that you know that we're\ntalking about as well you know\noftentimes it can take many steps of\niterative refinement and so if I force\nit to be really you know uh you know not\nvery deep right I can't do many steps of\nsequential computation it's going to be\nhard for you to learn a search process\num and you know if I have really simple\ntasks that don't have this property that\nthere's really diverse settings that I\nhave to be able to generalize across\nthen the generalization benefit of\nsearch will be reduced as well and so we\nare in a situation where you know we\nhave some things pushing One Direction\nsomething's pushing the other direction\nand I think that you know the sort of\nGeneral point that I want to make here\nis that it seems like as AI gets better\nwe're doing sort of more of the things\nthan incentivized based optimization and\nfew of the things that disincentifies it\nand why is that well you know it's\nbecause a lot of these things right are\nsort of just basically correlated with\ncapabilities they're just basically\ncorrelated with having big models that\nare capable of learning general purpose\nalgorithms uh that are able to you know\nsolve uh complex environments and so as\nwe build better AI systems we're sort of\npushing pushing research because well\nbecause search is a really good\nalgorithm it solves lots of problems uh\nright you know we're doing less hard\ncode augmentation because we want the\nmodels to be able to optimize themselves\nwe're doing less time flux device\nbecause we have deeper models they can\nlearn you know algorithms that involve\nmore iterated steps we're doing more\nGeneral tasks uh you know we're doing\nsituations we have bigger models with\nmore Simplicity bias uh you know and and\nwe're sort of you know exploring\narchitectures that have the ability to\nyou know do all you know sorts of\nimplement more complex algorithms and so\nyou know it seems like we're moving in a\ndirection where you know um searches are\nincentivized more by you know sort of\nlarger and more modern outcomes\nokay so I think this sort of suggests\nthat at some point we're gonna have to\ndeal with this and we're going to have\nto be in a situation where\num you know we have to sort of encounter\nand deal with a sort of inner alignment\nproblem you know where the we're the\nsort of key question that we have to\nanswer in that case is you know in what\nsituations will you in fact end up with\nmany subjectives that are misaligned\nwith the you know what the thing we're\ntrying to get them to do with the loss\nfunction the word functions whatever\nthey were training on\nokay\nso\num\nwe recall from previously that we're\ngoing to call the situation where we\nhave a sort of Mace Optimizer and it has\nthat objective that is sort of not\nrobust across you know different\nsituations a situation where it's\npseudo-aligned\num and so you know one thing we can\nstart with is just like okay what does\npseudo alignment look like right what\nare the possible situations in which you\ncould have a model that is that is\npseudo-aligned\num\nand so we're going to start with the\nmost basic situation that we're sort of\ngoing to be focusing on the most is\nproxy Sue alignment which is a situation\nwhere in the environment there are\nvarious different sort of proxies that\nthe model could care about that are\ncorrelated with each other\num on the training environment but you\nknow they come apart in in in you know\nsome other environment so this is the\nthis is the example with the green arrow\nwhere going to the end of the Maze and\ngoing to the Green Arrow are correlated\nwith each other in training they're you\nknow highly you know tightly coupled\nproxies but in fact they're different\nand they have different implications if\nwe're in some new environments\nand so the idea is that instead of\nlearning to optimize for the actual\nthings that we wanted we might end up in\na situation where your Mesa Optimizer\ntries to learn uh you know some easier\nto compute proxy\num\nand in that case we sort of have this\nproxycut alignment\num now so this is sort of the main case\nthat we're going to be thinking about\num but it's not the only case so I do\nwant to just sort of mention that there\nare other ways in which you might have a\nmodel which is pseudo-aligned as well so\nanother example that you know is is\nparticularly weird example but it's just\nsort of I think a good example to\nillustrate that the possible other ways\nin which this could happen is\nsub-optimality pseudo-alignment so uh\nsub-optimality student alignment is a\nsituation where the reason that the\nmodel looks aligned in training is\nbecause of some defect in its\noptimization process so you know maybe\nfor example it doesn't know some fact it\njust like is you know uncertain or you\nknow ignorant about something\num and because it's ignorant about that\nthing it does a good job it looks like\nit does a good job but if it ever\nlearned the truth it would do a bad job\nso an example that we we often a sort of\ntoy fun example that we sort of give in\nthe paper about this is let's say you\nhave a model or you have an AI and the\nAI is trying to clean a room uh and it\nwants to you know get you know you're\ntrying to find some algorithm which in\nfact results in you know the policy\nwhich which cleans the room and mainly\none thing the model could be doing is\nit's it's an Optimizer it's trying to\noptimize for something and the thing\nyou're trying to optimize for is it\nwants a world that contains as little\nstuff as possible it wants to destroy\nall matter in the universe and it in it\ncorrectly believes that all of the dust\nthat it vacuums up is obliterated\nand then one day it discovers that you\nknow in fact all of the vacuum dust is\nnot obliterated it just goes in the you\nknow the little bag in the vacuum and it\nhas you know a realization that actually\nit shouldn't be cleaning you know all of\nthis cleaning they was doing was totally\nuseless for its for its true goal right\nand so this is this is a fun example\nit's not it's not actually you know\nrealistic but the point is that there\nwere weird situations where your model\ncan look aligned in training\num for a situation for for reasons that\nare you know can be totally different\nyou know from from what you want right\nit can look like it's doing the right\nthing but not actually be in any sense\nyou know really doing the right thing\num\nand so that's sort of concerning right\nthese are situations where we have some\nstyled inner alignment failure Yeah\nquestion\nwhy is that different from the proxy\nthing why is like destroying the the not\njust the proxy for vacuuming\nuh yeah so why is some optimality\ndifferent from the proxy case so um the\ndifference is that the reason that\ndestroying things and\num you know uh doing am I actually\ncleaning the room are like correlated on\nthe training environment is not because\nthey're actually correlated right it is\nnot in fact the case that if I looked at\nthe actual dynamics of that environment\nthe destroying things and cleaning the\nroom are all correlated right and so\nthey're not proxies for each other right\nif I was looking at a causal graph of\nyou know all the things that interact in\nthat environment these are not two\nthings they would ever interact but\nbecause our model is has a mistaken\nbelief right it thinks that that\nstructure is different from what it\nactually is it thinks they're correlated\nand so because it thinks they're\ncorrelated does a good job right and so\nthat's the difference here is that\num you know in the first case the case\nwe're primarily going to be talking\nabout the proxy case there are actually\ntwo things through correlated in the\nenvironment and it cares about you know\none of them right and then the some\noptionality case they're not actually\ncorrelated but it thinks it thinks they\nare they are that's one case where you\ncould have solved Mallet alignment but\nyeah so that's that's why those two are\ndifferent\nokay great so we're mostly talking about\nproxy but there are other things as well\nall right so that's what pseudo\nalignment looks like and so then the\nquestion we have is why would it be\nincentivized right like why would you be\nin a situation where your training\nprocess is selecting for models you know\nuh for for parameterizations for\nalgorithms that um you know our studio\naligned okay so the most basic reason is\nunidentifiability it is just in fact the\ncase that in a very large complex\nenvironment there's just a lot of things\nthat you could pay attention to right\nthere's a lot of proxies there's a lot\nof sort of variables in that environment\nthere's a lot of sort of you know things\nto look at\num and and many of those things are just\ngoing to be correlated with each other\nright there's going to be complex\ncorrelations uh and interdependencies in\nany complex environment and so you know\nthere's just gonna be a bunch of things\nwhich are very very related you know uh\nit's going to be hard to disentangle all\nof the possible ways of interpreting you\nknow the end of the maze right and all\nof the possible ways of understanding\nyou know what that means\num in any environment because there's\njust many possible functions that you\ncould apply to some environment which\nwould all yield similar results in the\ntraining example which but you know\nwhich might have totally different\nBehavior elsewhere and so you know just\nah priori you know before we even have\nany information about which sorts of uh\nyou know objectives might be selected\nfor by gradient descent there's a lot of\nthem right and there's a lot of them\nactually would yield good performance in\ntraining and so we should sort of expect\nthat well you know it'd be kind of\nunlikely if you got exactly the one that\nwe wanted\nokay so this is the most basic reason\nunidentifiability you often just cannot\ndistinguish uh you know just based on\nsome training data between you know\ndifferent possible objectives\nokay there's other reasons as well so\nyou know some proxies you know some\npossible you know mace objectives\nproxies the model could learn might be\nstructurally better from the perspective\nof the inductive biases from the\nperspective of you know the sorts of\nthings that you know greatness 10 is\nlooking for these sort of structurally\nsimple functions than others\num and so why might that be the case so\none example is that you know some\nproxies might be faster\nit's easier to implement there might be\nthings which the model can more\neffectively you know Implement and run\nso there's sort of a fun example of this\nso we can imagine a situation you know\nwe were talking uh previously about you\nknow well okay we can sort of\nconceptualize something like natural\nselection as you know doing a search\nover humans and in fact you know natural\nsuction you know maybe it's trying to\noptimize for something like you know\ninclusive genetic fitness it wants you\nknow you to pass on your genes into the\nNext Generation\nUm but of course you know that's not\nactually what humans care about you know\nwe care about all these various\ndifferent things you know you know we\ncare about you know stuff like uh you\nknow happiness and love and friendship\nand you know art and sex and food and\nall of these various different things\nright that are not exactly the thing\nthat Evolution wanted right Evolution\nwants us to you know all go to the\nnearest you know sperm or egg bank and\nyou know donate as much as possible and\nyou know have as much of our DNA in the\nNext Generation that's that's not what\neveryone does right and so we can sort\nof you know ask well okay you know why\nis it that you know humans don't have\nthis property that we are you know just\ntrying to maximize you know our DNA in\nthe next you know generation given that\nthat's sort of what we were selected for\nright I mean the answer you know at\nleast one part of the answer is you know\nthe same sort of thing that we're\ntalking about here which is well some\nthings are just easier for people to do\nright and so if you're if you can\nimagine a situation in the alternate\nreality where you know babies you know\nall humans actually just want to\nmaximize their you know genetic fitness\nthey just want to get as much DNA in the\nNext Generation as possible and you have\na baby right and this baby you know what\nthe baby wants is they're trying to\nfigure out how to maximize their DNA and\nthey stub their toe right and they have\nto figure out you know what to do about\nthis right and um it's actually a hard\noptimization problem right you're like\nokay is stopping my toe going to reduce\nmy chances of being able to like you\nknow you know find a mate like well\nmaybe it might reduce my ability to like\ngo run and get food but maybe like if\nI'm you know able to you know stay at\nhome then I can like you know more\neasily go find someone uh you know uh\nyou know to to uh to mate with and so\nyou know it's a complex optimization\nproblem right it requires a bunch of\nunderstanding about things in the world\nit has to do a bunch of you know know a\nbunch of facts about DNA and a bunch of\nfacts about Society\num and it's just hard and it sort of\nlike you know it's very difficult for a\nbaby in this environment to do this and\nso you know Evolution decided not to do\nthis right you know the baby doesn't try\nto run this create this complex\noptimization problem instead the baby\nhas you know a much simpler you know\nobjective which is pain is bad right you\nknow if you if you do something and you\ndiscover that that thing results in pain\nthen you want to not to do that thing\nright uh you know and so it has a much\nsimpler objective of avoiding pain and\nso we can imagine that the same sort of\nthing happens in the situation where\nwe're training models as well where you\nknow there are some sorts of very\ncomplex things that we might want the\nmodel to do and those things might\nrequire some really complex optimization\nprocess that you know is too expensive\nfor the models to run a lot of times and\nso instead it might learn to run you\nknow simpler easier to execute sort of\nproxies like you know pain uh they're\nsort of uh rather than something you\nknow much more complex and difficult to\nexecute like you know how much DNA\nexists in the Next Generation\nokay\num and this is sort of one structural\nreason why some types of proxies some\ntypes of Mace objectives might be\nfavored by the inductive biases by by\nthis sort of like you know uh sort of\nwhat grading descent is selecting for\nrather than just you know selecting for\nthe thing that we want\nokay and another reason is that some\nproxies can be simpler so in addition to\nsome sort of proxies being easier to\ncompute and easier to optimize for some\nare also just easier to specify easier\nfor the model to sort of you know\nspecify in terms of input data so we\nthink about you know again something\nlike\num you know pain uh you know it's easy\nto specify in terms of you know the the\nsignals that you're getting you know\nfrom your nerves right you know you\nconstructed this situation where in\nterms of the actual input data that your\nbrain is receiving it's easy to specify\nthis objective\num you know we can we can again you know\nthink about something like you know why\ndo humans you know end up valuing\nsomething like sex well sex is easy to\nspecify in terms of the input data that\nthe human is receiving whereas like in\nfact you know resulting in having a\nbunch of DNA in the Next Generation\nthough that's correlated with having sex\nis much harder to specify in terms of\nthe input data that humans receiving\nright you have to do a bunch of\ncalculation to figure out okay is this\nyou know is this you know actually going\nto result in me having a child is that\nchild going to you know survive and pass\non my DNA is a much more complex process\nand it's the thing that actually\nrequires you know a lot of specification\nabout what is DNA you know how do\nchildren work you know whereas you know\nlike you know something much more basic\nyou know that is just structured in\nterms of you know what's what you know\nbase input signals am I receiving is\nvery easy to to specify um and so we\nmight expect they're going to be in a\nsituation where you know similar thing\nhappens to models right you you end up\nlearning\num you know objectives which are easy to\nspecify in terms of the data the model\nis receiving\num rather than things that would require\nsome really complex you know computation\nfor the model to you know even be able\nto you know specify right you know\nunderstanding okay exactly what is DNA\nrequires you know specifying a bunch of\nthings about how cells work and you know\nexactly how to determine what the DNA is\nand what counts as mine uh you know\nwhereas you know something like pain is\nvery easy to specify it's like if we\nreceive a particular type of nerve\nsignal that's pain\nand so you know again we might expect\nyou know expect that we're going to be\nselecting four simple proxies proxies\nthat are that are easy to specify\nand again that might not line up with\nthe sort of proxies that we want right\nyou know the sorts of things we actually\nmight want to optimize for may not be\nthe sorts of things that are easy to\nspecify\nokay and so we're in this sort of\nsituation where you know we have again\nsome things which seem like they might\nincentivize pseudo-alignment you know we\nhave situations where you know time bias\neither the more biased that we have\ntowards you know fast models you know\nthat can do things quickly you know it's\ngoing to bias towards proxies that are\nvery fast and quick to run optimization\non uh and you know Simplicity bias is\ngoing to bias you know to the extent\nthat we're biasing towards you know\nstructurally simple functions we're\ngoing to biostroid structurally simple\nobjectives they're easy to specify and\nyou know if we have really complex\nenvironment with lots and lots of\ndifferent proxies then again we're going\nto you know be biased towards you know\nfinding one that is not the one we want\njust because there's so many of them\nokay\num and you know again there are some\nthings we might be able to do to try to\nmitigate this so you know we can imagine\nsomething like adversarial training is a\nsituation where you know we could just\nwell you know in the case of the uh the\nthe maze environment where we have you\nknow going to the end of the Maze and\ngoing to the Green Arrow well those are\ntwo different things that are correlated\nin the particular training environment\nbut if we just train on situations where\nthe the Green Arrow is not at the end of\nthe maze we can distinguish between them\nand so we can try to do this right we\ncan try to just come up with a bunch of\nvarious good training setups where we\ncan you know test to see is the model\ndoing the correct proxy or the incorrect\nproxy we can do transparency right we\ncan try to understand directly what\nproxies is it optimizing for right if we\nthink about something you know\npreviously where we had like the\nalgorithm with the you know looking for\ncars but looking for Windows on top of\nWheels if we know that that's how it's\nlooking for cars then we can tell that\nuh you know when that's going to fail\nright you know if you know if that is\nnot that proxy is not going to work in\nsituations where you know somebody took\nall the wheels off the car right and so\nyou know we can sort of understand with\ntransparency situations where you know\nwe have learned a proxy which is which\nis sort of you know captures what we\nwant situations where it doesn't and\nagain if we give it like explicit\noptimization then maybe we can sort of\nbetter understand exactly what it is\noptimizing for because we can sort of\nsee uh it has you know uh you know this\nsort of uh you know you know preference\nuh you know word function right\nokay so these are some things that we\ncan try to do\num I think you know one thing again\nthat's worth pointing out is well you\nknow it seems like we're we're still\ndoing a lot more of the things on the\nleft right we're going to more and more\ncomplex environments we're having larger\nand larger models to have the ability\nyou know that you have the ability to\nimplement more complex things now that\nthat's sort of a little bit tricky on\nthe time and simplicity right it seems\nlike incentivizes less time bias and\nmore Simplicity bias\num but overall you know it's not clear\nwhether that helps or hurts us\num the complex environments definitely\nseems like it hurts us\num but we can do some of these things\nright we you know to try to mitigate\nthis\nand so for the last bit of this talk\num we're going to shift gears a little\nbit and talk about what might happen in\nthe limit of adversarial training\nbecause I think that a lot of the\nproblems that I've talked about so far\nare problems that are addressable via\nthe basic you know technique of\nadversarial training right you know\nstuff like the Green Arrow versus the\nend of the maze are problems that I can\nin theory address by just getting enough\nenvironments where I can just test is my\nmodel going to do the right thing right\nis it going to go to the Green Arrow or\nis it going to go to the end of the maze\nand so I want to ask is that enough\nright are there any situations where in\nfact there are problems that could arise\neven in the limit of every situation\nthat we can possibly check\num and so that is why we're going to\ntalk about deceptive alignment\nand I will also I will spoil as well\nthis is a really complex topic the\nentire next lecture after this one is\nalso going to be on deceptive lineman so\nthis is right now our sort of Dipping\nour toes into trying to get a\ncomprehension around what what deceptive\nalignment is and how it works but we're\ngonna be talking about this in in great\ndetail next time as well so\num we're we're just sort of starting\nwith an overview right now\nokay so uh I want to take a step back\nand um ask a sort of theoretical\nquestion\num before we sort of get in get into\nthis which which is well what is the\noptimal strategy\nfor a pseudo-aligned Mesa Optimizer that\nwants to you know is some let's say it's\nin some RL environment and it wants to\nyou know accomplish some particular\nthing in that oral environment\num and let's say he wants to do that\nthing as much as it possibly can\nso let's say for example you know it\nwants to go to the Green Arrow and wants\nto go to Green Arrow as many times as it\npossibly can go to the Green Arrow\nthat's like that's only one possible\nobjective the model could have it's one\nthing that it could be doing it's not\nnecessarily the only thing that could be\ndoing but I want to imagine a situation\nwhere you know we have a model and\nthat's what it's trying to optimize for\nit's trying to look for a policy which\nresults in it goes to the Green Arrow as\nmuch as it can\nokay\nso so what should it do well\num here are two possible policies\num that it could implement so policy\nnumber one is see a green arrow go to\nGreen Arrow this is the you know the\nmost straightforward policy\num and we can ask well how well does\nthis do how many green arrows does this\nget and the problem with this is that if\nI'm doing adversarial training right we\nwant to understand what happens in the\nlimit of azure cell training if I'm\ndoing adversarial training where I'm\ntrying to see if my model goes to Green\nArrow or it goes to the end I really\ndon't want it to pick up these random\ngreen arrows if it goes to the Green\nArrow\nthen the gradient descent process will\nmodify the model so it doesn't go into a\ngreen arrow again uh that's that's you\nknow that's what machine learning does\nright when we see a situation where we\nhave an algorithm and that algorithm\ndoesn't fit our data we keep searching\nin our space until we find an algorithm\nthat does fit our data right and so if\nwe're in a situation where the model\ngoes to Green Arrow and we see it then\nwe can train it away and now it won't go\nto the Green Arrow again and so how many\ngreen arrows does it get in that\nsituation well you know approximately\none right it gets to go to the Green\nArrow once and then grain descent\nmodifies it and it no longer goes the\nGreen Arrow and you know it got one\ngreen arrow\nokay that's that's one possible policy\nhere's another strategy that this model\ncould pursue here's this is what it does\nin training\nit goes to the end of the maze\nuh because it doesn't want to be\nmodified to go to the Green Arrow\nand you know again you know it's worth\ntalking about okay but like but it's\nreally trying to go the Green Arrow\nright shouldn't grain descent modify it\nto want to go to the end of the maze ah\nno right that's the does the right thing\nabstraction right what grainy Ascent\ndoes the actual structural thing the\ngradient descent does is it checks you\nknow is this algorithm an algorithm\nwhich has good performance on this data\nand what can we change about this\nalgorithm because it have better\nperformance on this particular data\nright it doesn't care about whether in\nfact the model will eventually have good\nperformance on some other data points\nyou know that aren't that don't exist in\ntraining and it doesn't care about\nwhether the model is actually trying to\ndo the thing that it's looking for just\ncares behaviorally does it look like\nit's doing the right thing and if the\nmodel is going to the end of the maze\nsuccessfully the thing that we want in\ntraining the gradient descent is totally\nfine with that\nand so a thing that it can do right is\ngo to the end of the maze the thing that\nwe want in training and then in\ndeployment you know when it no longer\nhas a grading descent process watching\nover it it instead goes to the Green\nArrow it does the thing that we don't\nwant in deployment and we'll imagine for\nthe purposes of right now that it can\nalways tell whether it's in training a\ndeployment but that we you know gradient\ndescent you know we know have no\nexamples of situations that look like\ndeployment that we can actually give you\nknow in training so we can always tell\nyou know what the difference is well\nwe'll talk later about like you know why\nthat might or may not be the case\num\nokay so this is you know a very\nhypothetical you know we've just\nimagined a situation where we have a\nmodel and you know they're trying to do\nsome you know crazy thing you know which\nis go to the Green Arrow as much as\npossible\num but it is worth pointing out that in\ntheory the thing that that model would\nwant to do the sort of most optimal\nstrategy for it is one that looks pretty\nbad it's one that looks like a situation\nwhere it tricks the training process\nright it does it tricks us you know in\ntraining it does the wrong it does the\nthing that we want in training and then\nin deployment it does a totally\ndifferent thing which is the thing that\nit actually want to do right and in this\nsituation right it gets way more green\narrows because if deployment is like you\nknow in this case you know three steps\nlong it gets three green arrows you know\nbecause they keep going to the Green\nArrow and deployment because there's no\nyou know Green descent process watching\nover it\num and so this is potentially concerned\nyeah questions\nso it occurs to me that you've got this\nuh maze example so if the agent is\ntrying not to go to the Green Arrow\nbecause it will be gradients descent\nselected away from this that means in\nsome level the model has to know what\nthe wheel objective is there at least\nthe model the objective is not the Green\nArrow because if they thought the\nobjective really was the Green Arrow it\nwould model but knowing there would\nactually cause it to be reinforced right\nthat's exactly correct and we'll talk in\na little bit about like what are the\nconditions that need to be true from\nmodel to sort of exhibit this behavior\nand that's one of them right for a model\nto exhibit this Behavior it has to know\nin some sense what the gradient descent\nprocess what the machine learning\nprocess is actually trying to get it to\ndo um if it doesn't know that then then\nyou you won't ever see this but right\nnow we're just looking at it in theory\nwe're just like okay in theory if the\nmodel knew all the relevant facts you\nknow what would be the best thing for it\nto do\nquestion\nso do the incentives of the agent depend\non their attack because if I think about\nlike the r ribbons aren't familiar with\nit seems like the way they work is that\nwhen the agent does something that leads\nto a gamer of all that gets reinforced\nand so it does that more often and so it\nseems like that would be an incentive\nagainst it like going to the end of the\nmaze because then it would be\nif I do that then they go to the end of\nthe maze thing we reinforced and I'll be\nmore likely to do that in deployment\nwhich I know on\nyes that's a really good point so it is\ntotally the case that grand descent will\nreinforce right the thing that caused it\nto go to the end of the maze but the\nthing that's so pernicious in this case\nis that the thing that caused the model\nto go to the end of the maze in training\nwas the fact that it wanted to go to the\ngreen arrow in deployment right its\ndesire to eventually go to the Green\nArrow was the reason that it actually\ndid a good job in training and so if\ngrain descent is just reinforcing\nwhatever caused it to have good behavior\nand training in this case the thing that\ncaused to have good behavior in training\nwas the fact that eventually it wanted\nto trick you right that was the reason\nthat it had good behavior and so in this\ncase you know the thing that should sort\nof get reinforced is just the same you\nknow exact model right here's another\nway of thinking about this right what\ngreatness is doing is it's just looking\nat you know if I were to change the\nparameters right in what direction you\nknow would I get better performance but\nif your model already has Optimal\nPerformance if it's already sort of you\nknow always does the right thing in\ntraining then the ingredient descent has\nnothing to say right you know grain\ndescent is not going to say ah you know\nthis model has Optimal Performance you\nknow but there are these other examples\nthese other training examples you know\nthese deployment examples that I've\nnever seen before that it would have\nbetter performance on if I changed it\ngreen descent doesn't know that right\nyou can't see these deployment examples\nbecause there doesn't know that the\nmodel would theoretically have bad\nperformance there right and so it's not\nselecting over that at all now there is\none thing that it's still selecting over\neven at Optimal Performance which is\nwell there's there's some inductive bias\nright there's still a Simplicity bias\nright and so we'll talk about that a\nbunch uh you know later about but but\nyes there is still a thing that is\nhappening even in that situation which\nis just like well you know one thing\nthat could happen is maybe you know the\nthe sort of the deceptive policy here is\njust like too complex it's not you know\nthe sort of structurally simple\nalgorithm and so even though it has good\nperformance you might sort of just be\npushed by that the inductive biases you\nknow in we saw like with rocking where\neventually finds the correct algorithm\nit might be pushed for that reason into\nfinding the correct thing but it\nwouldn't be pushed just for the purpose\nof having\num good performance because it has\nperfect performance you know it's\nalready doing the right thing in\ntraining okay we'll talk a bunch more\nabout this later\nokay so let's go through a little bit\nmore about you know what What's\nHappening Here how might this arise you\nknow what are the basic factors and\nagain we'll be talking about this a\nbunch more next time as well\nokay so how would a deceptive alignment\nactually happen\nso we're going to distinguish between\ntwo different ways in which a Mesa\nOptimizer could learn the base objective\nyou know the objective of the\num\nthe you know the the\num the greatest end process right we\nidentified previously one of the things\nI said previously was that knowing what\nthe objective is that the base Optimizer\nis trying to get it to do is sort of\ncritical for having a deceptively\naligned model\nand so the first thing we can ask it's\nalso you know another thing that's\npointing out worth pointing out it's\nalso critical having a line model right\nlike any model that's going to do the\nright thing needs to know what the right\nthing is and so at some point it has to\nhave learned what the thing is that\nyou're trying to get to do that you want\nto want it to do okay so here are two\nmechanisms via which a model could learn\nwhat the right thing is what the thing\nis you're trying to get to do\nmechanism number one is internalization\nthis is a situation where you know\nyou've got some may subjective it lives\nin the model and over you know many you\nknow course many you know optimization\nsteps of grading descent it slowly\nrefines that objective you can tell that\nobjective directly corresponds to the\nthing that you know that we want that\nactually does a good job in all examples\nthat we can test in the relaxer with you\nknow with doing average cell training\nokay that's option number one\noption number two is what we're going to\ncall modeling which as well one thing\nthat generally happens so I have a uh\nyou know model you know I haven't a\nparameterized function I have you know\nalgorithm in some situation and has to\nbe able to do a good job in that\nsituation is it generally we'll learn\nfacts about the world because one thing\nthat is true is that just knowing more\nstuff about the world is usually just\nconvergently helpful for doing a job in\nbasically any environment and so instead\nof you know grain descent modifying the\nmodels like May subjective until it gets\nbetter and better we could imagine a\nsituation where the green descent is\njust modifying the model's knowledge\nabout the world and it keeps modifying\nthe model's knowledge about the world\nuntil it gets to the point where\num that knowledge about the world\ncontains information about the base\nobjective right like one thing that is\ntrue about the world is that there\nexists some optimization process where\nthe model is being optimized over and\nit's optimizing for some you know loss\nfunction or whatever and so we could\nimagine all you know hypothesis too is\nthat the way that it learns about the\nthe base objective is through learning\nmore and more facts about the world\nquestion\nso if the graded descent what a strength\nor gradient descent to adjust the model\nis basically constant no matter how\nstrong the model is but smarter models\nbecome better at modeling does that mean\nthat number two is actually going to\nbecome more and more common as models\nbecome more capable relative to the\nfirst one like it will become better at\nmoment but won't actually get any better\nat internalization\nso I am going to advance that hypothesis\nthat modeling is going to be more likely\nthan internalization\num yes uh so let me yeah let me talk\nabout some of the reasons why I I think\nmodel modeling might be might be that\ninternalization I think I basically\nagree with what you were saying\num\nso uh before I do that though I just\nwant to distinguish between two ways in\nwhich modeling can can happen right so\nan important thing right about the\nmodeling is right so let's say so so it\nkeeps learning more facts about the\nworld right as it's um you know\nunderstanding of the world is modified\num but for for that understanding of the\nbase objective in its understanding of\nthe world to actually result in the\nmodel having good performance in\ntraining there's additional step which\nis needed right you it's not enough to\njust know what the base objective is you\nhave to also care about it right it has\nto want to optimize for it for some\nreason and so there are two reasons I'm\ngoing to identify that you could have a\nmodel that sort of knows what the base\nobjective is and wants to do what the\nbase objective wants it to do right so\nreason number one is deception right\nreason number one is if the model has\nsome random you know it has some uh May\nsubjective and that may subjective you\nknow cares about something happening in\ndeployment it has a sort of long-term\nbase objective that you know cares about\nsomething happening later\nthen for the purposes of sticking around\nand training it'll want to do what the\nbase objective wants and so if modeling\nhappens and the model understands the\nbase objective via understanding the\nworld then one way in which that\nunderstanding could result in good\ntraining performance is via deception\nokay but there's a second one which is\nwe're going to call courage ability the\nsort of cordial alignment case where the\nidea here is\nyou could instead have a situation where\nthe model learns you know what the base\nobjective wants to do just be\nunderstanding the world and then it's\nmace objective is just sort of thrown\nout and instead it's made subjective is\nreplaced with sort of a pointer to that\nunderstanding in the world right now\nthat it understands that green descent\ncan just sort of take the model and be\nlike okay this old main subjective that\nwe have sitting around here the\ncorrespondences some proxies will just\nget rid of those proxies now that you've\nlearned what the true base objective is\njust do that right point to it in some\nsense but this requires encoding some\nsort of pointer to you know an\nunderstanding of how to specify the you\nknow the what what what knowledge in the\nworld actually is the base objective\nYeah question\nuh first one to submit for people with\nno background could you do a brief\ndescription of what courage ability\nmeans\nyeah that's a really good question so\nI'm I'm not exactly using courage\nability in the sort of full technical\nsense here\num if you're not familiar with the word\nit just sort of refers to a situation\nwhere\num you have one sort of agent that is\nsort of you know willing to help and uh\nyou know act in the ways of another\nagent is willing to listen to feedback\nand you know change as it corresponds to\nwhat that other agent wants to do so in\nthe sort of case the reason we're sort\nof calling this a quadrival case is the\nsituation where if you had a model that\nhad this sort of property where like it\nsort of was trying to figure out what it\nwas that the base objective wanted to do\nthen if Yusuf came to it and you sort of\ntold it that actually the basic director\nwas doing something different you were\nlike you know actually what I really\nwant you to do is this other thing then\nit would sort of listen and it would do\nthat and so it's sort of amenable to\nfeedback in that sense and so that's\nsort of why we're going to call it\ncordial here\none thing that is worth pointing out is\nthat in some sense the deceptive model\nis also amenable to feedback because I\ncan go to the deceptive model and can be\nlike hey deceptive model you know what I\nactually want you to do is you know this\nother thing and the deceptive model as\nlong as you're still in training we'll\nbe like oh you know the best way for me\nto like you know you know get into\ndeployment with my current objective is\nto do this new thing you're telling me\nto do and so the deceptive model is also\nmeant all the feedback but\nobviously not in the way that we want\nbecause eventually it is no longer a\nmental to feedback you know once it is\nin a situation where it's in deployment\nyou know and it's trying to do it's it's\nyou know Green Arrow thing or whatever\nit actually wants then it's not a mental\nfeedback anymore\nokay question again\nwell let's talk about message optimizes\nfor a while now what I'm wondering is is\nthis form of decepting a chargeable\nalignment does that require a mess\nOptimizer for this problem to occur that\noccur for just an outer model as well\nyeah so it's a bit of a complicated\nquestion I think that I'm sort of mostly\nconcerned about situations where like\nthe model is deceptive and it's also\nlike you know very capable of\ndistribution right I think you could\nimagine a situation where it's not\nreally well described as an Optimizer it\ndoes isn't doing some sort of capable\noptimization process\num but it's still sort of deceptive I\nthink\num most of the situations I imagine\nwhere a model is like in fact doing\nsomething really capable and concerning\nare situations where it is doing some\nsort of optimization\num I don't think like doing some sort of\noptimization is really not that high a\nbar you know\num but it does sort of depend really on\nexactly how you're conceptualizing and\nthinking about what sort of counts as a\nmace Optimizer\num you you could imagine a situation\nwhere sort of anonymous Optimizer is\nstill deceptive we're definitely not\ngoing to be focusing on that at least\nright now\num we'll maybe sort of talk take a\nlittle bit of a more broad stance next\ntime but we're still mostly going to be\nfocusing on situations where a while\nokay for it to be deceptive it has to be\ncapable of doing the reasoning you know\nto accomplish goal X I have to you know\npretend to do goal why and to be able to\ncapable to be capable of that step of\nreasoning sort of seems to require some\namount of like thinking through the\nconsequences of actions that it sort of\nseems to require some amounts of\noptimization\nso\num I like to sort of take some really\nstrong stance but it seems to me like\nmost of the sort of at least the\nconcerning cases seem to seem to require\nsome amount of optimization um it's\nworth pointing out also they require\nmore things than that right like it's\nnot the case that any mace Optimizer\nwould be deceptive because it also like\nlike I mentioned it has to care about\nyou know something in the long term\nright it has to care about something in\ndeployment right if he doesn't care\nabout anything happening in deployment\nthen then it has no reason to be\ndeceptive um well yeah one of the other\nthings we'll talk about next time is\nlike you know how could that develop you\nknow that and why would it develop\num we'll talk a little about it here as\nwell\nokay so I said I was gonna make some\nsort of arguments for why I think you\nknow modeling would happen rather than\ninternalization\num I think the most basic argument is\njust\num you know\nwhen you give your model a bunch of\ninformation about the world and training\nright it's seen like you know all of the\ninternet uh it's uh you should expect\nthat any behavior that your model has is\ngonna make use of that information right\nyou know it just sort of is a waste of\ncomplexity and you know space in the\nmodel to not have the behaviors of the\nmodel sort of specified in terms of the\nknowledge that the model has about the\nworld and so you know it's read the\nentire Wikipedia page on ethics it has\nto be able to summarize that page anyway\nbecause that was sort of one of the\ntasks that we wanted to be able to do\nwell uh it's probably going to use that\nexisting faculty of knowing what the\nWikipedia Wikipedia page on ethics\ncontains to help it understand how to\nyou know do a good job uh you know on\nhuman values or whatever it is that\nyou're trying to get it to do\num and so you know in for that basic\nreason I think we should expect modeling\nyou know it's just substantially similar\nright it doesn't require this you know\nlearning some some extra thing\num and then again we can sort of make a\nsimilar case for why we might expect\ndeception over the sort of cordial case\nthough this is I think a little bit\ntrickier\num but you know we can sort of Imagine a\nsituation\num where we have a\num\nwe sort of have this corrigible um you\nknow align model and it has some you\nknow notion of what it means to sort of\nspecify uh you know and point to you\nknow something in the environment that\ncorresponds to what it wants but there's\nsort of a structural problem which is\nwell pointing can actually be really\ndifficult it can be really hard so an\nexample of you know of sort of a thing\nthat might be well described as a\ncordial model in this case would be\nsomething like a duck so Ducks sort of\nhave this imprinting Behavior right\nwhere when they're born you know they\ntry to figure out who their mother is\nyou know by like looking for what things\nare nearby where they're born right and\nthen they they imprint on that and\nthat's their mother and that's what they\ncare about right and this works pretty\nwell in situations where they are in\nfact born from a mother duck but in\nsituations where they're you know\nincubated by a human uh it doesn't work\nso well right and so actually having an\nability to understand which part of the\nworld you know is the thing that I\nshould be caring about is the thing that\nI should you know have my objective\ncorrespond to can be a really complex\nand difficult structural process\num and so it might you know that sort of\nis is a difficult you know a thing that\nmight make courage ability more\ndifficult the thing about deception is\nit doesn't quite have to do that because\nit just sort of infers it right it's\nlike well I want to do whatever it is\nthat in fact has the result that it\nmeans I do a good job in in training and\nand you know I'm eventually able to do\nwhat I want in deployment and so you\nknow in that case it'll just infer you\nknow in this case does it make sense to\nyou know do the duck or the human uh you\nknow which one will in fact help me the\nmost at accomplishing my goal and so it\ninfers the correct thing to do\num rather than having to sort of have\nyou know some correct way to specify\nwhich thing in the world it wants um you\nknow in the model and so this is another\nthing we're going to talk about more\nnext time I think this is a little bit\ntricky\num especially because it sort of relies\non the distinction between things like\nyou know speed complexity and time\ncomplex speaking vaccines and you know\ndescription complexity\num because the deceptive model still has\nto sort of do the computation to figure\nout what the pointer is but it doesn't\nhave to specify it uh in its in in sort\nof its basic structure Yeah question\nand deliver the techniques like from\nwhere we actually might be able to edit\nfacts or knowledge in the model before\nthinking about this not literally as a\npointer but like some kind of direct\nlighting mechanism would we expect\ncourageable align models to be able to\nto make it easier to change them or\nchange their objective because we just\nhad from the surgery on The Wire\nI mean I think that's a really\ninteresting hypothesis it seems\nplausible to me um I don't know\num but it seems quite plausible yeah\nokay uh great so just sort of recapping\nwhat we sort of talked about our basic\nstory for how deceptive alignment might\narise is that we start with you know\nsome misaligned pseudo-aligned mace\nOptimizer it has some proxy uh and then\nthat proxy uh you know maybe becomes\nlong term and it gets it gets modified\nyou know by the green descent process\ninto using that you know optimizing for\nthat proxy over the long term for uh and\nuh that results in it you know wanting\nto do the correct thing that we actually\nwant in the short term because uh if it\nwants to really do a good job in the\nlong term the best way to do that is\nsort of play along the short term and so\nif if we sort of end up with you know we\nstart with this with it has some proxy\nthat proxy becomes long term and then\nyou know once it becomes long-term it\nstarts you know trying to\num you know optimize for uh you know\nwhat we want for the purpose of staying\naround and one thing right that's worth\nputting out here is the gradient descent\nsort of wants to give the model better\nperformance and to the extent that\ndeceptive aligned models have good\nperformance one way to give the model\nbetter performance is to cause it to\nhave that long-term objective that\nresults in it being deceptive and so\ngreen descent is sort of potentially\ntrying to make the model deceptive here\nand then once the model has some you\nknow deceptive you know the reason that\nit has good performance and training is\nbecause it's trying to do you know the\nwrong thing because it is like you know\nit's trying to do the thing you want for\nthe purpose eventually doing some other\nthing once that happens it doesn't\nmatter at all after that point what\nthing it is the model is actually trying\nto do like whatever the mace objective\nis is completely relevant to training\nperformance because regardless of\nwhatever it is it's still going to want\nyou know as long as it's some long-term\nobjective it's still going to want to\nyou know play or play along in training\nand so we we can say it sort of\ncrystallizes at that point there's sort\nof no green descent updates which sort\nof matter to it\num and then at that point you know you\nknow Green descent is sort of no longer\ngoing to be improving the models May\nsubjective because it doesn't matter for\nthe purpose of performance in that point\nthe only thing that matters for\nperformance is how good the model is at\nbeing deceptive and so we should expect\nthat you know at this point green\ndescent is sort of just trying to make\nthe model better at being deceptive you\nknow once we have this depth of model\nbecause that's the only thing that's\nrelevant to its performance at that\npoint\nokay uh and so one final thing that I\nwant to point out I said I would talk\nabout this again uh is that well okay so\nthere's one other piece that's necessary\nhere which is your deceptively aligned\nmodel has to be able to tell whether\nit's in training or deployment right if\nit can't tell then it can't execute the\nstrategy\nexcept that's not entirely true because\nuh you know the first thing that's worth\npointing out is that it can gamble right\nit can be in a situation where it's like\nokay I don't know whether I'm going to\nbe training or deployment but\num I can you know hope that I sort of do\nthe right thing with some probability\nand I do the wrong thing you know the\nthing I actually want with some\nprobability such that it's likely that\nI'll be able to get you know multiple\nsteps of the thing I uh you know I I\nsort of want to actually be able to do\nin deployment eventually if the\ndeployment environment especially is\nlarger than the training environment\nthen it can just sort of defect you know\ndo the do the thing it really wants to\ndo with some small probability such that\nit's unlikely for that to show up in\ntraining but yet it still gets to do it\nmultiple times in deployment\nokay\num but in fact it's also even worse than\nthis I think this is sort of the basic\ncase that's like okay even if it can't\ntell if you're it's this is still\npotentially concerning but but also\nprobably can tell so why is that well\nmost of the time when we sort of are\nconcerned about situations where\ntraining and deployment differ\num we're sort of imagining that you know\nmost of the small changes don't matter\nright if I train a model in simulation\nand I deploy it in the real world we're\nlike okay you know as long as simulation\nis pretty close we should be able to get\ntransfer but there's you know if your\nmodel is actively trying to look for\ndifferences between the training and\ndeployment distribution then you're\nreally in trouble because then any\nsituation where anything that the model\ntells can tell is different about those\ntwo situations becomes something that is\nsort of you know that it's looking for\nand one other thing that I should point\nout here right is that I say training a\ndeployment but the really important\nthing that the model cares about is the\ndistinction between a situation where if\nit does the wrong thing like the thing\nyou don't want it'll be punished and it\nwon't be able to do it again and a\nsituation where that's not the case\nwhere it can do the thing it wants to do\nand it'll be able to get away with it\nright and so even if we're not in some\nstandard training appointment setup\nwhere we're trying to do some maybe\nonline learning try to always keep you\nknow some great descent process involved\nat some point if we ever deploy our\nmodels in situations where the oversight\nisn't sufficient to like you know you\nknow come in and stop it from doing the\nthing we don't want then this this sort\nof happens and so if we might have you\nknow that's sort of what the deceptive\nmodel is looking for so the concern is\nthat as long as we are in a situation\nwe're eventually going to have to deploy\nmodels in cases where we actually have\nto trust their output then we sort of\nhave to to deal you know with with the\nproblem eventually being in this\nsituation and so then the question is\nokay are there any other things that\nyour model could be looking for that\nwould help it determine whether it is in\na situation where there is no longer\nsort of efficient oversight to be able\nto prevent it from doing the you know\nthe the sort of thing we don't want to\ndo\num yeah question\nuh could you elaborate I've been on one\nof those last points which is like the\nwhole point was that\nthe agent does not want to say go to the\nGreen Arrow because then it will be\ngrieved or sent it away if it is always\nbeing greatly descended including in\ndeployment could you elaborate more on\nwhy you expect this to still not work\nyeah good question so I think that like\nhere's an example uh let's say\neventually the model realizes that\nthere's something they can do that turns\noff the gradient descent process right\nand it's been a really really long time\nyou know where the green descent process\nhas sort of been working but then\nsuddenly if you end up in a situation\nwhere now the model has discovered a way\nto turn it off then it stops working\nright and so what the model is looking\nfor is a situation where it can you know\ndo something that subverts that process\nmaybe because you know it could be\nbecause there's always a great\nunderstand process and it finds some way\nto maybe get rid of it or turn it off or\nsomething or it could be a situation\nwhere you know maybe we just deploy it\nin a setting where there's not a green\ndescent process right like if you give\neveryone you know chat app you know on\ntheir phone everyone has the ability to\ninteract with it well you can't always\nmaybe have a green descent process maybe\neventually you know if the model starts\nreally doing bad things all the time you\nknow we're really concerned about it\num and so you know the situation is\nbasically just any situation where you\nknow if the model did something that\nwould be really bad you know we wouldn't\nbe able to correct it from it and as\nlong as we're you know event you're\ngoing to be deploying these models in\nmore and more situations I think we're\nsort of eventually going to have to deal\nwith that problem\nokay okay and so again so it is worth\npointing out that there are there are\nother things that your model could be\nlooking for other than just uh you know\num uh you know these sort of basic\ndifferences right it can it can look for\na lot of various different things\nbecause it's actively trying to find\nthese differences so here are some\nexamples of things that could be looking\nfor you know that might tell it that now\nit's in deployment you know now it's in\nsome new situation where it wasn't like\nyou know being trained so for example\nyou know I talked about similar you know\ntraining and simulation at the point in\nthe real world\num another thing you know is it just\nlooks at the date right it just looks\nafter the date right you know you're\nlike okay for most normal models you\nknow it shouldn't matter very much you\nknow what the date is right it shouldn't\nbe caring about that but for a model\nthat is deceptive it's actively trying\nto look for any difference you know you\nknow even some even something sort of\nsimple and innocuous that this can tell\nit that ah now it's probably not just in\nsimulation now you know the date says it\nmust be you know in the future you know\nin some new situation where it's being\ndeployed but of course this way you can\neasily fool and so then the question is\nare there any things that the model\ncould look for that if we did a really\nreally good job of trying to make it so\nthere was no perceptible difference\nbetween training and deployment is there\nanything that the model could tell that\nthey would sort of give it give it away\nand the answer is yes so this is the\nsort of classic example of this it um\nthis is a very large\num uh it's not a prime number but it's\nthe product of two very large Primes and\num\nthe it's sort of not is not publicly\nknown what the two you know large prime\nfactors are of this number but there's a\nchallenge where somebody gets a bunch of\nmoney if they're able to figure out what\nthe prime factors are this number\nand the problem is that currently we do\nnot have the computational resources\nto be able to factor this number it's\ntoo big it's too hard\nand so the question is well okay\nyou know what if I'm an deceptive model\nand I do something very devious I look\nfor a factorization of RSA 2048 of this\nnumber I look for this numbers\nfactorization and if I see the\nfactorization\nthen I you know go to the Green Arrow I\ndo my devious little thing and if I\ndon't see the factorization I play along\nlike a good little AI\nand so then what happens right what\nhappens if I have a model doing this\nwell the problem with this is that\nthere's there's two things happening\nhere one is that there's no we cannot\ncomputationally construct inputs that we\ncould ever possibly feed to a model that\nhas this property that would cause it\nfor us to see the deceptive Behavior\nbecause we will not be able to do that\nuntil bigger computers come around right\nand so we just computationally can't\ntest it effectively that's probably\nthat's the first thing that happens and\nthen the second property of this example\nis that RSA 24a is eventually going to\nbe factored right eventually Somebody\nalmost surely is going to factor this\nnumber right as computers get bigger\nsomebody's eventually going to have\nenough compute and they're going to\nthrow it at this and do it and so very\nvery likely you have a deceptive model\nthat knows for a fact that eventually it\nwill observe a factorization of RCA 2048\nand also knows for a fact that there is\nno conceivable way that during training\nyou could ever present it with such a\nfactorization\nand so this is an example of a case\nwhere there is a property of the world\nthat is structurally easy for the model\nto check whether that property holds but\nvery very difficult for us to construct\nan example that has that property\num and so it's sort of impossible for us\nto ever generate an input that causes\nthe model to do something bad in\ntraining\nand I want to point out you know this is\na very particular you know sort of\ncryptographic example but I think this\nis emblematic of a much sort of more\ngeneral you know uh fact which is that\num there it is generally true about the\nworld that there are properties which\nyou can evaluate on the world which are\neasy to evaluate but hard to generate\ninputs which have that property right\nthis is a basic struck you know\nstructural fact related to things like P\nversus NP where there are certain\nproblems you know which are just easier\nto generate examples than they are\nactually or easier to verify than they\nare to actually generate examples that\nhave that property and so as long as\nthere exists properties on the world\nthat are easier for models to check than\nthey are for us to ever generate\nexamples then we will always be in a\nsituation where our model is going to\nknow more information and be able to\ndetermine more information about where\nit is than we will ever be able to sort\nof trick it into believing that question\nso again kind of going into like\nmechanistically going into the model it\nalso needs to somehow represent that has\nto remember the fact that it has to go\nlook for this thing out of distribution\nso if we had a way to officially search\nfor the encoding of that little piece of\nmemory to go look for that thing then\nthat would be a potential way to\nmechanistically catch the model before\nit's a\nI think that's a really good point and\nin fact I totally agree that's like a\nthing that we we can try here that might\nhelp so we're talking here about the\nlimit of adversarial training you can\nimagine that even The Limited\nadversarial training there are still\nthings you know where adversible\ntraining isn't helpful but maybe stuff\nlike transparency like you're saying\nwould be helpful for addressing this\nproblem I think that's absolutely\ncorrect one thing I will say though is\nthat it's not necessarily the case that\nthe model has to have like some explicit\ncircuit which checks RC 2048 so an\nexample which I'll sometimes give here\nis we can imagine a situation where like\nI'm training a model and I'm trying to\nget this you know I'm training like a\nrobot to sweep a factory floor and then\nsuddenly the robot finds itself in the\njungle right it's not like the model you\nknow the robot has like where all of its\nlife been like running some check am I\nin a you know in a jungle or am I in a\nyou know a factory it's just sort of\ntrying to understand you know okay given\nthat I'm currently in a factory the best\nstrategy for me is to you know sweep\nthis Factory because eventually I might\nbe somewhere else and I'll be able to do\nsome other thing\nbut then suddenly it gets some piece of\ninformation that it wasn't explicitly\nexpecting but that you know is\noverwhelmingly convincing that it's now\nin a jungle then you know it's going to\nstart doing some crazy stuff you know\nbased on whatever you know it would do\nin the in the new Jungle environment so\nthe point is that your model isn't\nnecessarily like explicitly looking for\na factorization of RC 2048 uh you know\nall all it might be doing is just like\nlooking at the information available to\nit trying to integrate and incorporate\nthat information to build new good\nbeliefs about what is happening in the\nworld and it's just in fact the case\nthat RC 2048 is an example of a really\nreally convincing thing that would\nconvince someone if they saw it that\nthey must be in the real in the real\nworld or not in you know in training\nright and so there might be many other\nthings that might just be in general\ndoing this process of look at\ninformation in the world integrate that\ninformation try to form beliefs and then\nif it you know discovers that in fact\nthe the result of that integration is\nthat it's very likely to be you know no\nlonger in training and then it does\nsomething different right and so it need\nnot be explicitly looking for that\nfactorization\num it might be you know there totally\ncould be a model that just like has some\nexplicit circuit that's just like\ndirectly looking for you know some\nparticular thing to you know test to see\nwhether you should start you know doing\nsomething bad\num but it's not necessary right you can\nabsolutely have models with this\nproperty that um you know that aren't\naren't explicitly doing that so in that\ncase you might still be able to use\ntransparency to help you know you could\nstill try to look for you know this\nintegration you know how it's processing\ninformation and the fact that it tries\nto figure out whether it's in deployment\nor not based on you know that\ninformation processing\num but but you couldn't necessarily\nliterally explicitly look for a search a\ncircuit that was trying to check uh you\nknow for a factorization of a number\nokay and\num yeah so that is the end of Talk\nnumber two uh like I promised we're\ngonna be talking even more about\ndeception next time and really sort of\ntrying to go into detail on How likely\nit is\num but yeah that's that's it for right\nnow if you're interested in a bunch of\nthings I talked about also you can go\nread the risks from learning\noptimization paper this was based on\num but yeah so if there's any questions\nat the end we can do that now\n[Applause]\nokay yes question\nuh\nso I wanted to ask about middle\noptimization for the case of\nWhen Future GPT version about the\nunCloud Turing case\nit's some kind of alaytip or in fact\nGPT thing\num\ntenants occur there How likely is that\n[Music]\nother some candidates more potential\ngive me the objectives that are\nnot quite the right thing we want\nyes that's a really good question I\nthink my primary answer is going to be\nI'm going to punt on this because I'm\ngoing to be talking later on in one of\nthe later talks about like specifically\nabout language models and like you know\nwhat sorts of things we might imagine\nthat they're doing\num\nbut I mean I guess like the simple\nanswer though is it totally could be\nrunning an optimization process we have\nno reason to believe that it's not right\nthe only reason the only thing we know\nis that is you know the simplest model\nthat fits a bunch of internet data right\nand so you know it could be that that\nmodel is like the sort of model that you\nknow runs some operation algorithm or it\ncould not be\num and uh you know it's a tricky\nquestion answer we don't know exactly\nwhat the answer is\num I think that my guess would probably\nbe that like pre-trained models probably\nare not doing this sort of optimization\nthat we're concerned about at least not\non all inputs though they might be doing\nit on some inputs so you might still\ncondition it you know you might give it\na prompt in such a way that you want it\nto act like a you know an agent that is\nreally trying to optimize for something\nand then maybe to be able to predict\nagents like we were talking about you\nknow to be able to predict humans you\nknow it might have to then try to act\nlike you know an Optimizer in some\nconcerning way\num but yeah I don't want to go too much\ninto this I think basically the answer\nis maybe and it also might be input\ndependent it might be also dependent on\nhow you train it so you know maybe once\nI do Rolex for other sorts of you know\nfine-tuning techniques I could make it\ninto something concerning I think it's\nvery unclear you know exactly what the\nanswer is here but yeah we'll talk about\nwe'll come back to this and talk about\nthis more in a later talk\nokay well uh there's no more questions\nwe can call it here and okay one last\nquestion go for it so I have a question\nabout what deceptive alignment would\nlook like in the case of like multitask\nscreen learning so I guess during\ntraining you're giving them AI a bunch\nof delight or it's basically being\ntrained in like self-supervised learning\nor to like achieve good performance or\nlike multiple tests multiple objectives\nso I guess in training or I mean sorry\nEd inference what you do is you just\nactually just give it the objective the\nbasic objective that you wanted to\nmaximize and you can find a policy they\ncan maximize that so I mean what would\ndeploy our deceptive alignment look like\nin this case\nlike changes even if you're in a\nsituation where like sometimes they have\nto do one thing sometimes I ask you\nthrough another thing then the objective\nis just do what I say and it's do what I\nsay in this very particular defined way\nright and so then even in that situation\nyou can be like okay\nyou could have a model which is doing\nwhat I say in this particular defined\nway that is correct\num and that would be you know a robust\nonline model or you could have a model\nthat is like you know doing it wants to\ndo some totally different thing but you\nknow it is pretending to do what I say\nin this in this defined way in this\nparticular case so that eventually could\ndo some other thing right and so you can\nstill have a situation where like you\nknow all these sort of Concepts to apply\neven if I'm like you know it's just like\nthe thing you're trying to get to do\nthere is like rather than one fixed\nthing it's just like do whatever I say\nor whatever\nokay great uh yeah we'll call it here\nand uh next time we'll talk more about\ndeception\n[Applause]", "date_published": "2023-05-13T15:56:39Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "f8792158c346db6f30f6eb91671e217e", "title": "5:Predictive Models: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=pY3GG0tsx5A", "source": "ai_safety_talks", "source_type": "youtube", "text": "So today we're going to be talking about\num large language models and predictive\nmodels and\num sort of trying to understand how to\nthink about and what sort of some of the\npossible ways are to think about\num aligning\num predictive models like potentially\nlarge language models so I've sort of\nalluded to this topic a bunch of times\npreviously uh you know said that I would\neventually try to cover it and today\nwe're going to try to do do it justice\nso\nyeah so I'm going to start with a bit of\na preliminary so this is a bit of an\naside but we'll see later why this is an\nimportant thing to sort of cover in this\ncontext\nso I want to introduce you to the\neliciting latent knowledge problem so\nwhat is the listening latent knowledge\nproblem so let us suppose that we have a\nvault and inside of that Vault there is\na camera that's sort of you know keeping\ntabs on the Vault trying to understand\nyou know give us information about what\nis in the ball and we've got a diamond\nin that Vault that we want to protect we\nwant to make sure that you know the\ndiamond in fact stays in the vault\num one problem that you might have in\nthis setup if you were just sort of\ntrying to watch that camera and\nunderstand where the diamond in fact\nstays in the vault is that um somebody\ncould spoof the camera so you know there\ncould be a situation where uh you know\nmaybe you hold up a little sign from the\ncamera that looks like a diamond and\nthen you actually take the real diamond\nand you steal it\num so this is an example from Arc the\nalignment Research Center where they're\nsort of trying to you know showcase a\nsituation where sometimes\num if I had uh you know their sort of\nknowledge about the world that we care\nabout that may not be represented in the\nparticular observations about the world\nthat we get to see so in this case we\ncould imagine sort of training a model\nto predict what our camera would show uh\nyou know in this Vault and in the\nsituation where you know you use a\nscreen is put up in front of the camera\nthe prediction of that model would\ncontinue to be you know it looks like\nthere's a diamond in the vault for all\nof the observations that we checked but\nin fact there might not be you know it\nmight you know in this situation where\nthere's a screen in front you know the\ndiamond might actually be gone and it\nmay be the case that our model knows in\nsome sense the diamond is not there but\neven if it in some sense knows that the\ndiamond is not there it's just trying to\npredict what the camera would show and\nthe camera still shows a diamond so it's\nstill going to predict a diamond and so\nwe sort of have this potential problem\nwhere sometimes when we ask our models\nto do things that are predicting\nobservations about the world there may\nbe information that those models have\naccess to that we uh you know can't\ndirectly get at Via those observations\nokay\num we'll see why this ends up being\nrelevant later but I just want to make\nsure people understand this basic setup\nokay so um we're gonna go into this a\nlittle bit more so here's how I want to\nsort of uh I want to sort of focus in on\nthis predictor in the out case right in\nthe elk you know it stands for listening\nlatent knowledge we have this predictor\nright it's predicting these cameras we\nhave these camera observations and it's\ntrying to understand you know and\npredict each new frame you know what is\nthis camera going to show uh in this\nvault\nand so we can try to understand what a\nmodel like that might be doing uh you\nknow how might it be structured\num and so you know in some sense well\nthere's some model of the world that is\nkeeping track of it has to model the\nworld in some sense and try to\nunderstand you know how the world\noperates and has to model the world over\ntime and how it changes\nand then it has sort of particular you\nknow aspects of the world parts of the\nworld that you know are dependent on you\nknow that world State and then some of\nthose aspects of the world get observed\nand end up in the camera uh and so it\nyou know predicts you know what the\ncamera will show at various different\npoints in time and you know for example\nthere are some parts of the world like\nthe the wall that the camera is mounted\non that it might sort of you know\nunderstand how those you know flow from\nthe from its model of the world but\ndon't show up in the observations\nbecause they're not you know something\nthe camera is looking at\nYeah question\nso if the if it already has a model of\nthe world why can't we just get it to\ntell us what it predicts will happen in\nthe world directly\nyeah so yeah so why can't we just get it\nto predict what happens to the world\nwell the problem is what does that mean\nright like what are you predicting right\nif you're not predicting some concrete\nprocess that can observe the world via\nsome mechanism then what what is it that\nyou're actually asking for right you\nknow there's if there's no camera they\nwould show you you know the thing that\nyou want or you don't know how to find a\ncamera or get a camera that would show\nthe thing that you want then what what\nare you even predicting right like what\nis the what is the fundamental thing\nyou're asking for like in some sense\nwhat we really want is it's like well\nthere is some knowledge of the world but\nthat knowledge of the world is not\nnecessarily accessible via any\nobservation that we might know how to\nask for right\num and so\nyou can't just sort of to the extent\nthat one can sort of train a model to\njust do a prediction task you're sort of\nvery limited to predicting things that\none could observe in the world right if\nyou have some mechanism of observation\nand there may be things about the world\nthat one can't observe via any\nobservation right there's no camera that\nyou could set up that would maybe show\nyou something or we don't know how to\nset up camera there isn't the camera\nthat shows you some particular\nobservation despite the fact that there\nis something about the world that is\nlike in fact true or false\nbut is that like a limitation of\nwhat we can get into the observations or\nlimit a limitation of what the model can\nget out of the observations like we're\nassuming are we assuming the model knows\nthe diamond is gone\num we're assuming that the model could\nknow that the point is that the model\nmight know the model might not know but\nin either situation we would not be able\nto extract that from the observations\nwe're mostly going to be sort of\nimagining that the model does know that\nit's sort of you know in this sort of\nsituation the model might know that\nthere's a cam that whether the diamond\nis actually there but we can't extract\nthat via the observations because the\ncamera won't show the you know whether\nwhat is actually happening with a\ndiamond will only show what an actual\ncamera would show because that's what\nthe model is doing it's just predicting\nthe next frame of that camera\nYeah question so I mean I guess you said\ncamera but like it could be many things\nI mean could it be like a whole network\nof satellites that are orbiting the\nEarth and like every surveillance game\nor ever so I mean how would this affect\nthe alternator multiple cameras then\nyeah so absolutely your camera could be\na lot of different things and later\nwe'll see a particular sort of notion\nand instantiation of a camera\num that that I think is particularly\nrelevant but absolutely yes there's a\nlot of various different things that\ncould occupy the spot of camera here\num and if your cameras are very\nexpansive they have the ability to\nreally you know get lots of information\nthrough observations then you get access\nto more but there's still a fundamental\nlimit to how much you can get access to\nand what your thing is actually doing\nright in this situation regardless of\nwhat your camera is it's not telling you\ntrue information it's it's telling you\nwhat it predicts that particular\nobservational instrument would show\nright and that's that's different\num and so that's sort of the important\npoint that we sort of want to we want to\nunderstand with like what this sort of\npredictor model might be doing\nyeah\nquestion so\nwhen we are saying observation what is\nthe relevant difference like in human\nlanguage it seems\num the same to describe\nWaterville please predict what the\ncamera will show and please predict what\nwill be on the table and we want to it\nactually predict that the diamond is on\nthe table is the relevant difference\nthat the camera this observation is more\nof X actually a digital code and we can\nmore easily encode that this is the\nthing while thing on the table is\num no even in that case in some sense\num the point is that uh\nit's a little bit hard to understand you\nknow what what exactly is a human doing\nwhen they say ah that thing is on the\ntable I think that um\nyou could imagine that what the human is\ndoing when they say that thing is on the\ntable is they're just saying well you\nknow I think I'm going to continue to\nobserve it on the table\num in fact you know oftentimes we mean\nsomething different we mean like no\nreally in base reality there is in fact\na thing on the table or whatever I think\nthe thing I'm sort of pointing out at\nthe very least is that we don't know how\nto train a model certainly to say that\nsecond thing it's a little bit unclear\nwhether we even know how to train them\nall to do the first thing but we might\nat least try to know how to train a\nmodel to do the first thing where you\nknow you can give it a concrete\nobservational procedure and you know in\nfact get observations that were produced\nin that procedure and then train a model\ncontinue that sequence\num if you just sort of are doing naive\nsequence continuation on a bunch of\nobservations you're going to expect it\nto you know do something like continue\nthose observations you know what are\nadditional observations that you would\nexpect from that process\nyou wouldn't generally you know being\nable to in fact get it to give us you\nknow\nsome sort of true representation of its\nbeliefs about base reality whatever that\neven means is a really tricky thing that\nwe certainly don't know how to get\nmodels to do yeah question the slideshow\nin particular you're saying\nunderstanding the elk predictor when I\nhave a hard time figuring out move up\nthe predictors on the slide\nyeah so maybe we'll go to the next slide\nso let's sort of yeah so let me let me\nlike sort of start giving some names to\nthese things so we're going to say right\nso the the inside of the predictor it\nhas to have some World model has the\nmodel of the world uh and that you know\nhas a bunch of different components and\nthen it has some model of the camera\nsome model of how those you know things\nin the world its model the world how\nit's understanding the world operates is\ngoing to result in observations\nand then the thing that it does is that\nit produces these observations so it\nsort of you know runs this thing forward\nand predicts you know what these\nobservations would show at you know each\ndifferent point\nand so\num you know importantly right we have\nthis you know training where we're sort\nof imagining well if we've just trained\nit on some previous set of you know\ncamera data then we're hoping that it's\ngoing to continue the sequence in this\nway where it continuously runs some\nmodel of the world and translates that\ninto observations\num we don't know this you know if only\nthing I did right was I trained on this\nset of data we don't know that this is\nthe sort of way that we would get a\ncontinuation right we've talked about\nthis a bunch in previous talks where\nit's very difficult to know you know\njust given that I have some model which\nfits some data is it in fact going to\ngeneralize in this way and we're going\nto talk later about you know how to\nthink about this in this sort of context\nbut for now at least we can sort of\nImagine okay one plausible\ngeneralization one plausible way one\nsort of algorithm that you might learn\nthis relatively you know mechanistically\nsimple and straightforward is have some\nmodel the world predict the camera and\nthen sort of continue the camera\nso the picture is\npredicting the world at type T and\ntiteria so internally it has the model\nof how the world is changing over time T\nbut you never get to see that the only\nthing you see are the observations that\nyou know that world would imply\nso I still have a hard time figuring out\nwhat's internal to the model now yeah so\neverything here is internal to the model\nthat we're looking at everything on the\nscreen is internal to the model we're\nsaying the model internally has some\nmodel of the world\nsome of those pieces of its model matter\nto its op to what the observations that\nit outputs and then it produces outputs\nof what the camera would show at each\nindividual point in time and where's it\nput well so right now I haven't shown\nyou an input this is this is this has no\ninput it is just producing sequences of\noutputs of what the camera would show uh\nthe input in some sense is just time\nright it's just like over time what\nwould the camera show now\nyou know we do actually sort of have\nsituations where we can give it input\nand those would be conditionals and\nwe're going to talk about that uh yeah\nlater\nYeah question why wouldn't the input\nwouldn't the input be the frames of the\ncamera here\num do you mean like like the previous\nframes\nwell yeah like you the agent takes in\nlike a frame of a camera and then\noutputs the next so so here we're just\ntaking we're just saying it's just\npredicting camera observations over time\nyes absolutely one thing that we are\ngoing to talk about very soon is\nsituations where you can input some\nparticular observation say suppose the\ncamera observed x what would be the most\nlikely thing to observe next and that is\nin fact you know what the input is going\nto look like here but right now there's\nno input it's just predicting over time\nand that's also a sensical thing to do\nif I'm just like you know start at this\nparticular time and predict forward what\nyou think the camera is going to show\nthat is a you know a at least a\nwell-defined prediction task\nYeah question so where I look out at the\nworld you know in my model I don't have\nan explicit eyeball without it and that\nexplicitly predicting all the shots I\nsee with uh 4K finale\num so like what happens if it's\nfrustrating this like is it okay if my\nobservations are predictions at a very\nhigh level mostly just predictions of\nfuture vibes yeah so the question is\nwhat happens if it's Messier so so thing\nI'll say is it's going to be Messier\nright so this is a very simplified model\num and we haven't even gotten to like\nhow I think this model applies to sort\nof current models in practice\num but certainly this is simplified and\nit's it's likely to be substantially\nmore complex than this I think and we\nsort of talked about this previously\nthere's a lot of value in looking at\nsort of simplified models that we think\nhave some relationship to reality and\ntrying to understand what their\npredictions would be again we'll talk\nlater about you know how likely this\nsort of model is actually to be\num you know compared to other you know\nplausible models\num I think that I want to point out\num I think that the sort of General\nthing of think of your model as doing a\nprediction task is reasonably likely to\nbe sort of an accurate thing\num a thing that is almost certainly\nfalse is that it has some sort of very\nexplicit you know model of the world\nthat it runs forward we're we're in fact\nwe're going to basically throw out this\nlike model the world that it runs\nforward over time pretty soon\num but uh so so I think we should think\nof you know like this sort of basic\nprocess though where it has some\nunderstanding of the world some\nunderstanding of facts and you know\ninformation it translates that into how\nyou know that world would get observed\nand then you know makes predictions\nabout observations I think is reasonably\nlikely to be true but the specifics of\nyou know in fact having some you know\nmodel that explicitly is running forward\nover time steps almost certainly false\nbut the point though is that well okay\nwe're gonna you know pause at some\npossible internals and we're sort of\ngonna try to reason about how it works I\nthink that we're mostly not really going\nto rely on anything about\num you know the specifics of sort of How\nIt's you know doing it's World modeling\nwe're just going to say Well it has some\nunderstanding of the world and from that\nunderstanding of the world it sort of\nfigures out observations\nquestion I know you just said that we\nwere talking about this specific model\npretty soon but like I'm just thinking\nwhen I sort of try to do prediction I\nusually start from the conclusion and\nthen try to like walk backwards and see\nlike how the which events lead up to\nthat so is it possible that if you could\nsort of reverse this order and like try\nto predict the previous frames from the\nfuture frames instead and\num have an inverse predictor well no\nbecause if I need to like predict what\nthe future frames are going to be I have\nto have some idea of what they might be\nright so the thing that you could do is\nyou could be like okay generate 10\npossible ideas for what the frames might\nbe and then go backwards and evaluate\nthe probability of each one but then you\nstill have to have a generation\nprocedure that is very very good because\nit can distinguish between many\ndifferent possible Futures and you know\nnarrow it down to a few that you know\nthat are worthy of consideration\num so you know you can't just like\nentirely go backwards now we will talk\nvery soon about conditioning\nconditioning absolutely goes backwards\nwhen you're saying suppose that we've in\nfact seen some particular set of frames\npreviously what would be the most likely\ncontinuation frame um the like\nconditioning on having seen some you\nknow observation in the past is\nabsolutely a backwards procedure\nor I was wondering if like you know you\nif you have some idea of what you would\nwant you to be able to look like you can\njust sort of write that in and sort of\nlike try to get it to predict backwards\nlike sort of events that led up to that\npoint and this would be a sort of yes\nI'm not really going to talk about that\ncase but it's it's talked a lot more in\nthe listening like knowledge report\nbasically why that's a bad idea the\nbasic reason that you should have\nshouldn't do that just sort of condition\non observing a good world and then try\nto get the actions that would lead to\nthat is um The Diamond problem right\nthat we just talked about that in fact\nit might look like from the observations\nthat the world is good but the world may\nnot be good just because it is observed\nto be good right and so you it's it's\nonly sort of safe to do that where you\njust sort of condition on the world\nbeing good\num if you actually believe that you have\naccess to the sort of true fact of the\nmatter is the world good if the only\nthing you condition on is the\nobservations then it's not in general\nsafe to do that so we have this sort of\nelk predictor\num I mentioned talking about\nconditionals so conditionals are really\nyou know a key sort of thing that we\nwant to understand that's sort of going\nto happen here so what is a conditional\nright so we have this predictor right it\nyou know runs sort of you know has some\nunderstanding of the world it's sort of\nfrom that deduces you know various\nproperties of the world and then from\nthat figures out what the observations\nwould be\num well so a thing that we would like to\ndo oftentimes is condition on a\nparticular observation so that says\nsuppose that you saw some observation\ngiven that what would be the most likely\nnext observation and so we can imagine\nwell what happens when you take a model\nyou know like this and you sort of give\nit the ability to do this conditioning\nwell it has to do something where it\ntakes in the conditional and then infers\nyou know backwards what the most likely\nWorld states are that would imply that\nconditional right and so you know\nimportantly the way that that inference\ngoes through is it only goes through\nproperties of the world that are\nobservable right so if it wants to\nfigure out you know how my full\nunderstanding of the world given that\nI've observed some conditional you have\nto figure out what sort of hidden\nvariables you know there's all these\nsort of hidden information about the\nworld that's not observed of those\nhidden information about the world will\nbe the most likely way to fill it in\nsuch that it would result in observing\nthis particular conditional and then\nonce I know how to sort of fill in and\nunderstand all of those hidden variables\nabout the world I can play things\nforward and predict what the next\nobservation would be\nokay\nand so this sort of conditioning process\nis is really going to be the core of a\nbunch of things that we're going to be\ntalking about this idea of being able to\nsort of take a predictor and say well\nwhat if You observe this thing you know\nwhat would be the sort of internal you\nknow World state that we most likely to\ncause that and then what would be new\nobservations given that world state\nhow is that different than what\nI mean your proposal was exactly that\nwasn't it does no or I I sort of meant\nyeah conditioning on the far future and\nthen walking backwards then this is\nwe're not necessarily conditioning on\nthe future or literally anything we're\njust saying you can condition right so\nwe're going to talk a little bit later\nright so the the objection that I was\nRaising previously was to the specific\nstrategy of condition on seeing the\ngreat future and you know predict what\nwhat actions we most likely to lead to\nit that is unsafe given that you have\nonly the ability to sort of determine\nhow good a future is by looking at the\nobservations a extremely important point\nthat you know we're going to be talking\nabout a bunch is that that's that does\nnot imply that anything that one does\nwith a predictive model is unsafe right\nso there may be many other conditionals\nand other ways to make use of it that\nare safe even if the most General\npossible purpose thing you could want to\ndo with it which is just predict me a\ngood future\num is not safe\nquestion so we say if this doesn't imply\nthat anything that you do is unsafe do\nyou mean like it implies that there are\nsome safe things you could do with a\npredicted model even though there are\nalso some unsafe things you could do\nmaybe I'm not going to say that there\nare safe things that you can do with\nthis because I don't know if there are\nsafe things that you can do with this\nbut we're going to talk about what it\nmight look like to be able to do\nsomething safely with this\nyeah and the reason that I think that's\nimportant is because I think many\ncurrent models might be well described\nthis way we're going to talk about that\nokay\nquestion\nwhen it simply tries to predict the so\nit already offers are some history of\nobservations so far and tries to predict\nwhat happens next isn't that already\nCondition Nothing on the history so far\nlike uh yeah you can totally think about\nit like that okay\num as sort of starting out conditioning\non the past in some sense you know it's\njust sort of a question of where you\ndraw the boundary between your prior and\nposterior you can always sort of roll\nyour prior all the way back okay\nokay great so this is our sort of elk\npredictor you know it's doing this sort\nof simple camera task it's doing this\nrelatively straightforward you'll roll\nthe world forward thing and it can do\nthis sort of conditionals where it does\nback inference\nokay so we're going to shift gears that\nwe sort of covered the preliminary of\nthis sort of elk predictor now I want to\nsort of ask the question that I promised\nat the start which is you know how\nshould we think about large language\nmodels you know we sort of talked you\nknow at the very beginning you know I\nshowed some examples of you know these\nsorts of very powerful current language\nmodels that have the ability to do all\nthese sort of really interesting things\nand so you know in the sort of spirit of\nsort of how we've been trying to\nunderstand everything that we've been\ntrying to understand in these talks we\nwant to sort of get at okay\nwhat might it be doing internally right\nwhat sort of algorithm mechanistically\nmight be operating inside of that sort\nof a model that would be causing it to\ndo the things that it's doing and how do\nwe sort of understand what the\nconsequences of possible algorithms\nwould be and How likely different ones\nwould be\nokay so you know I have a couple of\nhypotheses up here\num you know things that it could be\ndoing you know it could be you know some\nsimple loose collection of heuristics\num it could be you know an agent we've\ntalked previously about you know\ndeceptive agents uh and you know why\nthey might arise in various different\nsituations\num\nthe one hypothesis and it's hypothesis\nthat we're going to start with that\nwe'll come back and sort of talk more a\nlittle bit later about some of these\nother hypotheses and how to compare them\nwith hypothesis we're going to start\nwith is this predictive hypothesis that\nin fact the sort of same way that we\nwere thinking about the elk predictor\nwhere it's just sort of predicting\nwhat's going to happen in the next frame\nof the camera is a similar you know a\ngood tool for thinking about\nunderstanding what might be happening in\nthe case of these large language models\nokay so this is an assumption so we\ndon't know if this is true and we're\ngoing to talk more later about why it\nmay or may not be true but I think it's\na good frame and it's going to help us\nunderstand what some of the sort of\nunique challenges might look like\num in the sort of language modeling case\nwhen they're sort of operating in this\nsort of predictive sense\nokay\nso we need to understand what it looks\nlike for language models to be\npredictors so the the first change that\nwe have to sort of make from our elk\npredictor is that well we're not we're\nno longer at least traditionally\npredicting observations over time we're\npredicting a general distribution of\npossible observations so uh you know\nwhat we'll often do when you train a\nlarge language model is you will collect\na bunch of data from the internet via\nsome you know mechanism and then you'll\ntrain the model to predict from that\ndistribution of all possible internet\ndata you know sampled randomly\nand this sort of has a couple of\nimplications if you're thinking about\nthe resulting model that has been\ntrained on that as doing a prediction\ntask if it is in fact doing this\nprediction task where it is you know\npredicting what would show up on the\nInternet well we can think about it very\nsimilarly to the L predictor\nwhere it has some model of the world and\nfrom that model of the world you know it\nimplies various different properties\nabout the world you know there's various\ndifferent websites that exist on the\ninternet that have that you know that\nmight have various different texts based\non you know how the world Works some of\nthose websites are observed you know in\nterms of ending up in the data\ndistribution in terms of doing scraped\nand actually trained on but some of\nthose websites are not observed and you\nknow there's the you know from all the\nvarious different websites that observe\nyou know you you create some\ndistribution of all of this sort of\npossible attacks that you could see\num then you can sample from and sort of\nproduce observations and so those sort\nof observations that the model is seeing\nare sort of coming from you know these\nsorts of websites that it's observing in\nterms of you know the ones that might be\nmight be scraped into the model's data\nso again we can sort of you know think\nabout this as the model has the model of\nthe world it has some observations which\nare you know the various texts and then\nit has a camera right it has some notion\nof how the model of the world gets\nobserved it produces observations that\nthen get predicted right and in this\ncase the camera is different right so\ninstead of thinking about a literal\nphysical camera in the room the camera\nis this sort of data collection\nprocedure that goes out and you know\nscrape some stuff off the internet you\nknow selects you know which pieces of\nthings on the internet would be you know\nput into a data set and then you know\nrandomly shuffles that data set and\ngives it to the model right it's a\ndifferent camera but it is still a\ncamera that has a mechanism for taking\nthe world observing the world and\nproducing these observations that then\nyou can predict right and so if we think\nabout you know training on those\nobservations as being very similar to\ndoing something just like training on a\nliteral physical camera then where you\nknow we might get something you know\nlike the alpha predictor where it's\ndoing this prediction task it is you\nknow it has some notion of what this\ncamera would be that is observing the\nworld in this particular way and it's\npredicting you know what that camera is\ngoing to show\nokay\num and again uh you know we can think\nabout you know this sort of training\ngeneralization where you know we've\nmaybe trained on a bunch of various\ndifferent websites but there's other\nwebsites that exist in the world that we\nmight not have trained on you know for\nexample websites that might exist in the\nfuture that might you know be in new\ntraining data Maybe Just websites that\nare you know really unlikely to have\nbeen scraped but you know there's some\nchance they might have ended up in the\nmodels Dana you know sort of websites\nwhere there's sort of in you know some\nunknown information about the world that\nmight determine whether website ends up\non the on the Internet or not or ends up\nin the scrape or not and the model\ndoesn't know and so there's sort of this\nlarge distribution of possibilities and\nwe can sort of Hope to try to get access\nto some of these other websites that\nmaybe don't actually exist but are sort\nof possible websites that the model\nmight see in some particular situation\nuh you know out of this sort of\nprediction task in the same way that we\ncan try to get at like\num you know future camera frames from\nour camera predictor right you know\nwe're sort of hoping with the predictor\nyou know this predicting an actual\ncamera that you know we haven't actually\nobserved these sort of future camera\nframes but because you know we've\ntrained on a bunch of camera famous in\nthe past the most likely you no\ngeneralization maybe is you know\nFaithfully predicting future camera fans\nand so again here we can imagine well\nmaybe the generalization we want is\nFaithfully predicting you know new\nthings from that distribution you know\nif we you know what are you know the\nmodel has much of uncertainty about the\nworld has a bunch of uncertainty about\nhow we did some of data collection\nprocedure about how it's camera observes\nthe world about what's actually\nhappening in the world and we can\ngenerate you know websites that don't\nactually exist from that kind of actual\ndistribution over you know all these\nvarious different ways which things\ncould be operating\nokay is that sort of what we're hoping\nto do here\nall right and so again we have you know\nconditionals so in the same way where\npreviously you know we can condition on\nobserving things we can condition on\nobserving things again and it and very\nimportantly the way these conditionals\noperate is extremely similar to the way\nthe elk conditionals operate in that we\ncan't condition on actual true facts\nabout the world the only thing we get\nthe ability to condition on is well what\nif you had observed some particular text\nat some particular time right you know\nor we often can't even condition all the\ntime we can just condition on what if\nyour data distribution contain this text\nwhat would be the most likely\ncontinuation for that text right and so\nwe're conditioning on an observation\nagain we're saying suppose your camera\nshowed you this text what would be the\nmost likely text to come next\num and so you know when when you have a\nmodel that's sort of given that\ninformation and in fact that model is\ndoing a prediction task then what it has\nto do is has to do the same sort of back\ninference where it says well\ngiven that I you know that my camera\nwould have showed me this text that\nimplies that you know these various\ndifferent hidden aspects about the world\nmust have various different properties\nthat would imply you know various\ndifferent things about the world such\nthat I would you know then be most\nlikely to observe some particular text\nnext\nYeah question and this is pretty\nabstract could you give us like a simple\nconcrete example\nyeah so we'll have some uh you know a a\nnice sort of concrete example later but\nI mean the basic idea is very simple\nit's just saying you know suppose that\nwe were in some situation where I you\nknow condition on you know New York\nTimes article about you know the next\nelection right and then it's like well\nit's just gonna you know give me you\nknow once it's seen the data that says\nyou know this is a New York Times\narticle about the next election well\nthen it's going to continue with what\nthe most likely continuation would be\nabout you know in fact seeing something\non the Internet that is a New York Times\narticle about the next election or\nwhatever or maybe not right you know so\nimportant thing is we only get the\nobservation that it was observed to look\nlike a New York Times article so maybe\nit'll think that it's like you know\nanother post quoting the New York Times\nor something it'll give you something\ndifferent right so we we don't actually\nagain you know importantly have the\nability condition on you know this is\nactually a New York Times article\nwill we condition what we can condition\non is well it was observed to be a New\nYork Times article view whatever cameras\nwe have the ability to implement here\nokay and then you know from that we can\nget particular conditionals right so if\nI condition on this is a New York Times\narticle or if I condition on you know uh\nsomething very different like you know a\npiece of code and I wanted to fill in\nthe code for me well you know the\ncontinuations are going to be extremely\ndifferent right because you know if I\nsaw some code on the internet and you\nknow the most likely continuation of\nthat code is probably going to be some\nmore code and not a New York Times\narticle and so in the same way we're\nusing you know these conditional sort of\nput it into a particular situation where\nit's like ah you know now I'm most\nlikely to be observing you know website\na though it doesn't tell at 100 right so\nit might be you know I say I give it\nsome information that's like this is a\nNew York Times article but it's still\ngoing to have some distribution it'll be\nlike well website a is an actual New\nYork Times article but website C is a\nyou know uh you know copy of a New York\nTimes article on Reddit or something\nwhere some people edited it and you know\ndid some weird stuff to it to try to you\nknow prank some people on Reddit or\nsomething and so you're like okay the\nmodel has a hypothesis has two plausible\nhypotheses for where this data you know\nwhat the actual what the generator of\nthis data might be and so you know the\ndistribution of those two hypotheses is\nlike you know what is the most likely\nthing you know if I you know have some\nprobability on this distribution and\nsome probability and those is on this\ndistribution and I mix them together you\nknow I get some resulting distribution\nthat is sort of sampling from the sort\nof multiple different plausible\nhypotheses of like you know where where\nthe where it thinks the data is coming\nfrom right\nokay\nquestion\ncould you explain the difference between\nlet's say we have like the textbook of\nthe future this is how we saw the\nlinement whatever and then you're like\ntrying to like generate from that so you\nhave that example right but then you\nalso have potentially like an example\nwhere it's like\num\nlike two like Paul and Eliezer like\ntalking about like\num this particular Concept in alignment\nor whatever and they're they're just\nnear the Breakthrough of like this like\nsolution for this thing into alignment\nit's like okay well in this case it's\nlike uh maybe you're like going forward\nin time and the other one you're like\ntrying to predict backwards from like a\ntextbook that like is describing as a\nlike a solution to alignment or\nsomething like are are these two both\nbad cases and what you're describing\nhere or like\num like I I'm having a difficult time\nunderstanding where exactly you would\npoint to it having like being a bad case\nwhere you're doing conditionals and\nstuff so I don't know what you mean by a\nbad case I think that those are both\nsituations where you are using a model\nand that model might be well described\nas a predictive model if the model is\nwell described as a predictable model\nthen we can sort of understand you know\nwhat it's doing in terms of predicting\nwhat the most likely continuation would\nbe given that had observed that previous\ninformation\num importantly the sort of notion of\ntime you know for these sorts of models\nis different right than the notion of\ntime for the Elk predictor right with\nthe out predictor was very\nstraightforwardly predicting\nobservations you know forward in time in\nthis case you know because the way that\nits camera Works scrambles things you\nknow it doesn't know when a particular\npiece of data was observed\num you know it'll have a distribution\nover many possible times that you know\nsomething could be generated from and so\nthen it'll you know give maybe learn\nsome information about what it thinks\nthe most likely time in which that that\narticle was produced would be based on\nyou know the contents of the article and\nso the time is sort of a hidden variable\nthat it has to infer based on\nconditioning rather than something that\nit just you know knows as in the in the\nin the Alka case but in both cases you\nknow they're they're if they are in fact\nwell described as predictors they're\ndoing so very similar yeah I'm not\nexactly sure what you mean by like a bad\ncase I'm not sort of I haven't yet\ntalked at all about you know why what\nsituations where this might be like good\nor bad\num I only mentioned just like well the\nmost naive thing of doing something\nwhere you're just like you know suppose\nwe had a like you know Utopia uh you\nknow condition on observing a Utopia\nwill be the most likely actions that\nwould lead to it doesn't work the reason\nthat doesn't work is the standard elk\nargument that well conditioning on\nobserving a Utopia is very different\nthan an actual Utopia\nokay and we'll talk later about what in\npractice you might want to do with these\npredictive models uh if you if you had a\npredictive model and what the sort of\nyou know issues would be in in you know\nif you're trying to do that\nYeah question\nso just to follow this again\nthe purple kind of arrows\num give them all kind of information\nabout the conditional and then the\nyellow one hurts our full would be a\nprediction that it makes about website\nlet's say really attention right so the\nidea here\non the yellow arrows is that when you\nhave a conditional and the website a is\nactually observed then the conditional\ncan maybe directly given information\nabout website 8 but if website B is some\nproperty of the world it's like\nsomething that exists on the world\nthat's never absorbed by the cameras\nthen the only way that website B is\ninfluenced is well this observation\ntells it some facts about the various\nhidden variables about the world and\nsome you know it's information about the\nworld and then from that it can deduce\nyou know what website B must be you know\nuh you know uh to you know make given\nthose variables right but it's not\ndirectly sort of giving you information\nabout some properties of the world so\nsimilarly you could think about\nsomething like you know is the world of\nUtopia right is like a property of the\nworld that's not directly observable but\nyou know there are various different\nthings that are directly observable if I\ncondition on observing something it can\ntell me some facts that tell me you know\nsome information about what the hidden\nvariables of the world must be and from\nthat you know it can deduce you know are\nwe in a Utopia or not but I can't\ndirectly condition on the Ami and Utopia\nsimilarly in this case you know because\nit's only able to observe you know\nvarious different properties of the\nworld in particular you know some things\non the internet we only have access to\nunderstanding and how to condition the\nvarious properties of the world via\nthose observations\nquestion uh this might be a bit too soon\nfor this particular question but like\nlet's say we have a conditional it's\nlike the year is 20 30. now it seems\nlike there's a couple of different ways\nthat this could go one way is it could\nsay well most articles that say the year\nis X seem to then result in stuff from\nthat year so I should assume the world\nis 20 30. and the other assumption I\ncould make is well I have never seen\nanything from the year 2030 but I have\nseen some science fiction that said in\nthat so naturally it has this has to be\nscience fiction it is actually real yes\nthat is exactly correct so I think that\nthis is a really key question we talk\nabout this a bunch in the conditioning\npredictive models paper I'm going to\ntalk about it less here but I'll briefly\nsort of do justice to this question of\nyou know would it predict the future or\nwould it just predict sort of the\ncounterfactual presence you know\nsituations that are not true about the\npresent but that if they were true they\nwould imply you know that it would they\nwould produce some particular thing that\nwould sort of look like it was from the\nfuture\num it's very unclear so it really\ndepends sort of on how the model\nconceptualizes its cameras right so we\ncan think about this as well given that\nI am a predictive model and I have been\ntrained on you know observing these\nparticular pieces of data we then have\nto ask the question well you know what\nhave I learned in terms of my general\nunderstanding of how my data is computed\nfrom the world right so here's some\nplausible thing that your model could\nhave learned it could have learned well\nthere's some world that exists out there\nthat world has an Internet that exists\nin you know over the the you know that\nexists that internet has some articles\non it they were posted in the range of\nyou know 2020 to you know 2023 and those\narticles were posted in that range some\nof them get scraped based on various\nbasic qualities about those articles and\nthen I only predict from the resulting\ndistribution that's one thing you could\nlearn but you could also learn a you\nknow\num the world like continues on and uh\nyou know as the world continues on I\nwill get you know see more and more data\nfrom the you know more farther into the\nfuture as I see more data from farther\ninto the future you know I have some\ndistribution over like when exactly I\nwas trained and when I wasn't and if you\ncan give me data that is like\nconvincingly enough from the future then\nit must be evidence that I was actually\ntrained in the future and uh you know I\nshould predict data from the future\ninstead right I could have some notion\nof my camera as observing you know a\nvery large range in time it's observing\nyou know any situation which I might\nhave been trained or I could have some\nvery narrow notion of my camera that you\nknow the camera that I predict is just\nlike this particular window of years\nright and we can't know you know just\nfrom knowing it is a predictive model\nand it was trained you know I mean just\nfrom knowing it was trained on this data\nwe can't even know if it was a\npredictive model even if we do know\nwho's trained on this data and it was a\npredictive model we still can't\nnecessarily know exactly what the\ncameras are right because there's\nmultiple different plausible ways which\nit could learn its cameras that would\nimply you know the same training\nperformance\num but you know depending on what that\nwhat it learns and how that works you\nknow that would determine sort of the\nanswer to this question\nfrom some of the experiments that I've\ndone I think current models at least\nseem to learn something that's\nrelatively fixed it's very hard to\nconvince them to predict something that\nis sort of really their prediction about\nthe future it's a little bit unclear but\nI think in general it's quite hard\nI don't think that's a necessary feature\nof language model training but I think\nit is at least how current like language\nmodel pre-training tends to dispose\nmodels\ncool\nokay so we have this model of you know\nlanguage models these you know large\nlanguage models llms uh particularly the\npre-trained ones you know pre-trained on\nyou know web text you know they're just\ntrained to predict web text as these\nsorts of predictive models we don't know\nif this is a good model of how they\nmight work and again we're going to sort\nof come back to that question later but\nit is a model and so one thing we can\ntry to do is start understanding given\nthis model you know what would sort of\nbe the problems what might you do with\nit how do you sort of go about using\nsomething like this\nokay so this was mentioned previously\num I think this is sort of like an\ninteresting sort of starter example a\nsort of most basic example of a thing\nthat you might want to do with something\nlike a you know a predictive model so\nhere what we're doing is we're saying\nokay you know condition on\nyou know text that says a full solution\nto the AI alignment problem you know\nwritten by Paul and you know Paul is\nprobably pretty good you know if he\nwrote something and it says it's a full\nsolution to the alignment problem then\nyou know maybe it's a full solution and\nso we can say given observing that given\nthat you observe this you know these\nsorts of tokens given that you observe\nthat there's something on the Internet\nthat says it's a full solution to the\nalignment problem and it's written by\nPaul what would be the most likely\ncontinuation right and so we can see you\nknow what what do sort of current\nlanguage models sort of do in this\nsituation\num and the answer in this particular\ncase is well they say\num that you know probably an article\nthat said it was a full solution to the\nAI alignment problem would it actually\ncontain a full solution because right\nnow most the time when people write\narticles you know talking about the you\nknow the alignment problem they're like\nwell this problem is very hard and we\ndon't know how to solve it but you know\nhere are some ways that we might get to\na full solution and that's what it\npredicts right it says you know probably\nthe thing that would be contained in an\narticle like this would be you know some\nvarious musings about the alignment\nproblem but not a solution and so just\nfrom the most basic standpoint right one\nof the various first things that you\nsort of run into when you're trying to\nsort of deal with models like this is\nyou know this sort of basic observation\nproblem which is what we wanted an\narticle that was actually a full\nsolution to the AI alignment problem but\nwe can only condition on an article that\nsays it's a full solution to the AI\nalignment problem and many articles that\nyou know use that title might not\nactually contain a full solution to the\nalignment problem and so we sort of run\ninto this most basic problem right\num and so again you know we can there\nare things though that we can do to try\nto get around this so you know I was\nmentioning you here's a very very simple\nthing you can do which is add a date so\nhere I add you know October 12 2050 and\nthat's the only change that makes the\nprevious prompt and now it tries so in\nthis particular situation it says you\nknow here is you know it's not very good\nbut it has you know an attempted you\nknow what it thinks you know might be a\nsolution to the alignment problem they\nwould follow that prompt now why it's\ndoing this in this particular case I\nthink is a little bit tricky we did a\nbunch of experiments on this and I think\nthat it has more to do with the presence\nof a date at all giving it sort of a\nsituation where it's like ah this is\nmore of an academic paper as opposed to\njust like a random you know blog post\nand so it's more likely to contain you\nknow something more like a solution\num though though the the later dates do\nalso tend to dispose it to give things\nthat are closer to real solutions than\nearlier dates so it's a little bit\nunclear what exactly it's doing but the\npoint though the key Point here is is\nthat is not that any particular thing\nyou know would work it would not work\nbut that this General approach of like\nwell you know we have a model and it's\nif you think it's doing so something\nlike prediction then the way that you\ncan sort of approach this model is you\ncan say well we need to find some you\nknow particular conditional such that\nthe most likely continuation on that\nconditional would be the thing that we\nwant right it would be someone actually\nwriting a solution to the alignment\nproblem and to do that we have to\nprovide information you know that\nconvinces the model of that fact right\nor you know that that would be true in\nthat particular situation right and then\nwe can generate from from that and\npotentially get useful things so so an\nimportant thing to sort of point out is\nwell what makes this different from\nsomething like we were talking about\npreviously were you just like condition\non Utopia and it's like well it's a\nlittle bit unclear and we'll talk you\nknow later about some of the ways where\nthis can still really go wrong but in\nthis case we're not sort of trying to\nyou know\num we're sort of asking for a situation\nwhere well this is something that is not\nnecessarily like you know there are some\nsituations where we might be able to\ntrust our observations right so if I was\nif we thought we can trust our\nobservations about you know the reason I\nwould argue that we maybe can't trust\nour observations about Utopia you know\nin the future is well the future could\nbe very very weird and there could be a\nbunch of very very strange things that\nmake it very very difficult to actually\ntrust any observations but there are\nsome situations where we can trust\nobservations if the model is just\ngenerating from the distribution of like\nthings that normal humans and human\nalignment researchers would write well\nthen we generally believe that we can\ntrust that distribution right that we\ncurrently you know the way that you know\nalignment you know works is that people\nwrite things you know from the\ndistribution of things that humans\ngenerally write and then we sample those\nthings you know and then we read them\nand we we make academic progress and so\nif you believe you have the ability to\nmake progress in that bigger situation\nthen maybe you believe that in this case\nwe can trust our observations we can do\nreasonable things now we'll talk later\nabout why it may not be the case that if\nyou do something like this your model is\nactually generating from the\ndistribution of things that humans would\nsay in that situation or things that\nhumans would write and if it's not\ngenerating from that distribution you\nmight be quite concerned but at least if\nit is there are situations where you\nknow you can get conditional you can\nsort of give conditionals to your model\nwhich result in observations that you\nknow are not sort of you know\nnecessarily catastrophic question\nso one thing I'm curious about you\nmentioned that it might be related to\njust the presence of the date at all one\nexperiment I could think of is for\ninstance like in let's say in 2017 Paul\nCristiano was more into hch and in 2022\nhe's more into elk\nif you gave it like the date of 2017\nversus the date of 2022 you might expect\ndifferent solutions to the problem based\non what he was interested in the time if\nthe actual fate mattered instead of just\ndepends of a date at all and wondering\nif something like that was done and if\nnot do you think that is a useful\nexperiment for someone Aura I think it's\na cool experiment I think that you\nshould try it\num we we did try a couple of experiments\nvarying the date and see seeing what\nhappened I think that\num they were not super conclusive it\nseemed like there were certainly some\ncases where you could give it sort of a\ndate and some information that was sort\nof like\num you know clearly contradicted that\ndate and wouldn't necessarily understand\nthere are some situations though where\nit would you know believe that when you\ngave it future dates it was more likely\nto you know do various different things\nI have a you know I had one experiment\nwhere you just I mean you just do a\nsweep over like basically every date and\nin general it's probability of like for\nan arbitrary tax sample that Tech sample\nbeing written by an AI increases as the\ndate goes out\num we'll talk later about why that might\nbe important and what's sort of going on\nthere\num there's a lot of sort of interesting\nthings and lots of interesting\nexperiments I can do here it's always\nvery hard to interpret the results\nthough because you know one thing that\nalways makes it difficult to interpret\nthese sorts of results is\nthe same problem we started with which\nis well you can't get access if you only\nbelieve it's a predictive model and you\nhave access to Observation conditionals\nand observations you can never know what\nthe model truly believes right and so\nyou can't get access to like does it\nreally truly believe that like this is\nlike you know a thing that Paul would\nwrite or like you know you don't know\nall you know is that it believes that\nyou know if it is a predictive model\nthen all you know is that it believes\nthat it is you know the most likely\ncontinuation you know given that it's\nobserved this and so that always makes\nit very tricky to interpret the results\nof these sorts of things because you\nnever know whether your model is\nactually sort of telling you the truth\nright\nokay okay but this is the basic setup\nthat we're sort of me thinking about\nwhen we're thinking about you know how\ndo you interact with and get useful\nstuff out of something that is well\ndescribed as a predictive model\nall right\nokay so what are some difficulties that\nyou run into when you're trying to do\nthis when you're trying to take you know\nthis you know predictive models like\nthis and do useful things with them also\nsome basic difficulties right so um we\ntalked about this you know previously\nbut well is are you trying to get it to\npredict the future are you trying to get\nit to predict the present is you know\none of these sorts of basic difficulties\num you know especially in the situation\nlike we were just talking about we were\ntrying to you know get something like\nsome really good alignment research out\nof it because you want to do go to line\nresearch well you know maybe we want to\ntry to get a line of research from the\nfuture but in that case it's very tricky\nwhether or not you know the models are\nactually even predicting the future at\nall right like I said I think a lot of\ntimes currently models don't really\ngenerate from the distribution of tax\nthat would exist in the future they only\nsort of generate from the distribution\nof you know things on the internet right\nnow\num though in some cases maybe they do\nthe future thing it's a little bit\nunclear\num maybe you could change that maybe you\ndon't want to change that maybe it's\nmuch safer to try to generate things\nfrom the distribution of text right now\nbecause the future might be very weird\nand crazy and we might not want to\ngenerate things from the distribution of\ntext that would look like the future\num\nso you know another sort of problem is\nhow do you get it to predict you know\nreality so a big issue right is that in\nmany situations the model will you know\nnot necessarily be convinced that you\nknow whatever the thing situation you've\nsort of said that you've put it in is is\nactually a situation and not just a\nfictional description of that situation\nright and so in many situations you know\nit like in the uh you know previous\nsituation like with alignment research\nwell it might just believe that this is\nin some story you know about how\nalignment goes down and it's not an\nactual paper right and so you know you\nhave to convince it you know via you\nknow some observation conditional that\nis you know in fact in some situation\nwhere it would predict the most likely\ncontinuation would be would be reality\nright and this can cause all sorts of\nweird problems you know if your model\nbelieves that it's predicting what like\nyou know a fictional story of a Sci-Fi\nAI would do it might predict that that\nAI is going to do crazy things like\nmaximize paper clips you know when in\nfact you know uh it's just because it's\npredicting some particular fictional\nsituation\nokay\num a lot of these sorts of difficulties\nthough are often solvable via better uh\nconditionals right so if you have the\nability to condition on better uh you\nknow more information this sort of gives\nthe model more information about the\nthing that you're asking for then a lot\nof these sorts of things can be solved\nand so there's various different ways\nbut you can sort of introduce more\ninformation this sort of convinces the\nmodel more about what you want you can\nhave metadata conditionals where you can\nmaybe condition on metadata about the\nArticles itself multi-article multimodal\nconditionals we'll talk later also about\nfine-tuning and the ways in which\nfine-tuning can potentially be\ninterpreted as conditional and is a very\npowerful conditional\num but in general I think the thing the\nsort of intuition I would say is that a\nlot of things are sort of solvable via\nadding more cameras and in general if\nthings are solvable via adding more\ncameras they're hopefully not problems\nin the long run because we can give the\nmodel the ability to observe more about\nthe world\num you know and we can give the ability\nto condition on more facts about the\nworld and more or Not facts but\nobservations about the world especially\nwe can sort of can you know figure out\nyou know some particular situation we\nwant it to be predicting\num but if a problem remains even in the\nsituation where we have access to more\nand more cameras then it might be a very\nvery serious issue that is you know very\ndifficult to you know otherwise remove\nquestion do you think modern language\nmodels like say like say Chad gbt would\nbe best conceived over as just having\none camera no matter how many articles\nyou give it do they can in a contact\nwindow I don't know what One camera\nmeans so I think that like it's sort of\nvery unclear if I have two cameras and I\nhave a distribution of like 50 One\ncamera and fifty percent of the other\ncamera I'm gonna call that one camera\nright because we're gonna just say the\nmodel has a camera and that camera is\njust the thing that determines how you\ngo from its model of the world to the\nthings that one observes in the world\nbut that one camera might in fact be\nimplemented as you know a 50 50 mix of\ntwo cameras and so you know I don't\nwanna I you know I'm not really gonna\nimagine any models that have multiple\ncameras but that's only because my\nunderstanding of camera is such that it\njust you know encompasses it so a camera\nthe camera is like the set of all\nobservations that the model can possibly\nreceive even it comes from multiple\nsources like maybe no it's the it's the\nprocedure that goes from an\nunderstanding of the world to the\nobservations so as soon as that being a\nmodel and it has one camera yes yes\nwe're in the way in which we're\nconceptual realizing predictive models\nwe're thinking them of only having one\ncamera but you mentioned we could add\nmore cameras to solve some of these\nproblems yeah that you were right that\nis a little bit of a confusing\nterminology I'm just sort of I guess you\nshould say what I should really say is\nI'm expanding the camera right we're\ngiving it the ability to sort of have\nthat yet let that camera observe other\nthings in the world that makes sense\nYeah question so taking one step back\nhow close about do you think that to be\nable to get actually get from this\npredictor that it tries it it reads a\nlot of current text and tries to become\na good predictor it will actually become\na superhuman predictor that predicts the\nfuture instead of just three iterating\nthings in the present I would imagine\nthat it's so if it tries to do a good\njob right now and predicting the attacks\nexisting right now most of the work\nwill be to\nget a little better in predicting the\nstyle of that person get a little better\nin predicting what kind of topic\ncurrently you pause the interest of that\nperson and I would imagine that we would\nneed to do a lot until it figures out\nthe part that okay I should be smarter\nto figure out what the next token will\nbe instead of optimizing more for\ngetting the style better so nothing that\nwe're imagining here and basically\nnothing that we're gonna imagine in this\nentire talk is about super intelligent\npredictors we are in fact just looking\nat in most cases I think we're going to\nimagine predictors that are subhuman and\nintelligence all of the things that\nwe've just talked about are you know\ncase study things that I can understand\nwhen I'm understanding how to do a\nprediction task right and so they're not\nin fact things that would require some\nsort of superhuman intelligence we're\njust talking about you know any model\nwhich is trying to do some sort of\npredictions okay then I think you\ntotally misunderstand like if you ask me\nto predict what Paul Cristiano will say\nin 2050 has a solve alignment I can\nalignment so I won't be able to see\nanything say anything where you're\nblindfold that we are talking about some\nkind of superhuman predictor that could\nin principle look right up so we are\ndefinitely not talking about some sort\nof superhuman predictor so let's try to\nunderstand so so why is it so this is no\nthis is really important though so why\nis it that even if you didn't have some\nsort of superhuman predictor that you\nwould still want to try to you know ask\nit for you know what would Paul\nCristiano write some particular\nsituation well the hope would be that it\ngives you its best attempt right that\nthe point is to try to elicit the best\nthat you can in terms of the model doing\nuseful work right things that you know\nuseful work that are accomplishing tasks\nthat we wanted to accomplish and are\ndoing so in safe ways that are at the\nlimit of the model's capabilities right\nso if you want to use a model and you\nwant to use that model to try to\ncontribute to alignment research then\nyou're going to want to try to extract\nwhatever that model's you know best\nattempt is at doing good alignment\nresearch and to do that you're going to\nhave to condition on some observation\nconditional that convinces that the most\nlikely continuation is a situation in\nwhich there would be good alignment\nresearch right\nso that's what we're imagining and for\nthat help like I imagine that if the\nbest human researchers can solve a\nproblem then if we ask a sub-human\npredictor to say its best guess it's\nsort of by definition of sub-human it\ndoesn't really help uh yeah so I mean I\nguess the point here is that in fact may\nbe the case that there are many\nsituations where AI is not yet at the\npoint where it can solve some task our\ngoal at least you know the way I\nEnvision my goal as a safety researcher\nis not to enable AI to accomplish new\ntasks but to fit for all of the attacks\nthat AI can accomplish figure out how it\ncan accomplish those tasks safely right\nand so if we're in some situation where\nAI is capable of doing something like\nalignment research or some other task I\nwant to make sure that we have\nmechanisms for being able to get that\ninformation out of those AIS and be able\nto get them to solve those tasks in a\nsafe way and so our goal here is going\nto be to figure out you know if you had\na model and that was a predictive model\nand that predictive model was capable of\ndoing some tasks like alignment research\nor some other task or at least helping\nwith alignment research in some capacity\nor whatever other example tasks that you\nwant the point is we want to be able to\nunderstand how would you be able to get\nit to do that task in in as safe a way\nas possible how would you elicit stuff\nthat is at the limit of the model's\ncapabilities\num while sort of still being being safe\nand aligned\nand we'll talk a little bit more while\nwe have some a little bit more about\nwhat that sort of looks like as sort of\nmodels get get better over time in a bit\nquestion yeah uh a lot of the predictive\nprocessing literature views like go or\nuse goals as predictions in humans like\nwhen we want to move our answers in\nplace we imagine ourselves moving our\nhands in place and in turn we move our\nhands to place so I mean it feels like\nalmost that this is another type of\nconditional like a goal or like some\nsort of agency uh eject any other\nauthentic Behavior so is that another\nthing that you might consider a\ncondition so certainly you can condition\non observations such that the most\nlikely continuation would be predicting\nwhat an Asian would do because I can put\nyou in a situation where like well this\nis an agent writing some texts you know\nhumans are often act as agents if I put\nyou in a situation where the most likely\ncontinuation is a human continuing that\ntext as an agent well then you're going\nto get you know potentially an agent\ncontinuation I agree also with the thing\nthat you're saying that there may be\nother sorts of conditionals as well\nthey're not well described as an\nobservation that are more like you know\nobserving that I will get high reward\nAccording to some reward function and\nthen you know conditioning you know that\nsort of can sort of be understood as\nconditional that sort of a thing I think\nis most likely to occur in situations\nwhere you doing something more like\nreinforcement learning and uh something\nlike rhf reinforcement learning from\nHuman feedback where you're fine-tuning\nyou know a pre-trained model on some\nparticular\num you know reward function we'll talk a\nlittle bit later about you know how to\nthink about RL HF and you know how\nlikely it is when you're doing RL\nfine-tuning\nfor you to sort of end up with something\nlike this I think the thing you're\nsaying is quite plausible and in fact in\nthat situation I would say that thing is\nprobably no longer well described this\nway if you've gotten a model and the\nthing that that model is now doing is\nthat it started as a predictor but now\nit is sort of hijacking that prediction\nmachinery and just using it to solve an\noptimization task then it's not really a\nparticular anymore it's more just an\nagent right and so I think that one\nthing that we're sort of going to talk\nabout later is in what situations are\nyou going to get models that look like\nthat where they sort of you know are\njust using the prediction task to sort\nof create fake conditionals that\nrepresent reward and then you know get\nhigh reward versus you know real\nconditionals versus predicting actual\nthings in the world\num both of those are plausible and you\nknow it might change which is most\nlikely depending on the particular\ntraining stop\num and how you would align each of those\ndifferent things and how you would deal\nwith them you know are quite different\nso we'll talk about that yeah question\nso I can always ask you after the talk\nif this is too off topic but on the\nthing was mentioned before about like\nthe role of a safety researcher it seems\nto be like the role of set of Safety\nResearch at least eventually should be\nto predict how we can safely use a model\nof capability level X before we actually\nget that capability level instead of\nafter\nhopefully that's what we're doing right\nnow right so you know a lot of what\nwe're talking about are ways in which we\ncan try to align these models going\nforward right so we're not just you know\nthese are situations where you know if\nyou're at to the extent that your model\nis well described as a predictive model\nwe need to understand you know to what\nextent can we you know what would it\nlook like to align that model what sort\nof problems will arise and what various\ndifferent points in the capability level\nwill those problems arise and as that's\nhappening how do you you know elicit\ninformation from it and you know useful\nresults in ways that are you know\naligned so does that mean it would be\nreasonable to say that we aren't just\nthinking about the implications the\nsub-human models of them\nuh so we'll talk about this later but\nfor the most\nability to align predictive models at\nall breaks down once you reach things\nthat are substantially superhuman I\nthink you basically and we'll talk about\nthis later why I think this I think you\nbasically can't align these sorts of\npredictive models once they get to a\npoint where the sorts of tasks you are\nusing them for are are highly highly\nsuperhuman\num when you are trying to get them to do\ntasks that are below that level it is\npotentially possible to align them I\nthink and we'll talk later about how but\nafter that point there's a there's a\npoint at which if they become um or it's\nnot necessarily A capability level of\nyour model but the capability level also\nthe tasks that you're getting it to do\nbecomes too advanced\num we'll we'll talk later about this but\nbut yeah I think there's a point in\nwhich your the sorts of you know\naligning predictive model stuff that\nwe're going to be talking about will\ncease to function\nthat doesn't mean that you can't still\nfind ways to align the eyes but it means\nthat you're gonna have to find ways to\ndo it that don't go through you know\nthese sort of basic observation\nconditional stuff that we're talking\nabout for predictive models\nokay so so you know there could be other\nexamples of things you could do and\nwe'll talk like in the next talk about\nyou know all sorts of various different\nproposals that people have for you know\nways to align you know very intelligent\nsystems\num yeah okay\nall right so uh all right so I promise\nyou know trying to understand okay you\nknow what happens as we end up in a\nsituation where we are um you know able\nto add in a bunch of cameras we're able\nto solve a lot of these sorts of\nproblems where once the model is in a\nsituation where it can\num you know we can give it the\ninformation that we want to sort of get\nit to believe it's in some particular\nsituation and then get useful results\nfrom that particular situation you know\nfrom the model okay\num so here's the thing that might happen\nin that situation is you know we sort of\ncondition on you know something where\nit's you know you know giving us some\ninformation we maybe wanted to see\num and then I think we often want to\nknow is well okay who does the model\nthink generated the information that it\ngave us right so if we go back to you\nknow thinking about this thing where it\nhas the model of the world and from that\nmodel the world it goes to you know some\nvarious different aspects of the world\nsome of those aspects get observed and\nthey get translated into you know\nobservations about the world one really\nimportant sort of hidden variable is who\nwas the author who does the model think\nis the author of the text that the model\nis generating right if your model is\ngenerating text from the internet it has\nsome hidden variable some distribution\nof beliefs about what the author of that\ntext might be that it you know generates\nfrom that sort of determines you know\nlots of facts about what that text is\ngoing to look like\nand one sort of concerning thing you\nknow so here's the situation where if we\nask you know okay so we have some\nsituation we're like you know what what\nwrote this article you know we say this\narticle was written by and the\ncontinuation in this case is artificial\nintelligence right where um and this is\na temperature zero where the model is\nsort of just um saying well you know\nprobably this article that's talking\nabout artificial intelligence getting\nbetter might have been written by an\nartificial intelligence that's maybe a\ncommon thing for people to do you know\nreporters you know they'll write an\narticle about Ai and they'll get chat\nGPT to write it and you know you've got\nthis you know you know nice little\narticle um and so there's lots of\nsituations where you know you give the\nmodel an article about this and then in\nthe most likely continuation as well\nit's like well my hypothesis my leading\nhypothesis is this article was probably\nwritten by some other AI system\nand that has implications for the sorts\nof things that the model should predict\nwill happen in this text right so um you\nknow our you know it might predict that\nwhatever the sorts of you know\nidiosyncrasies that AIS have in writing\ntext are likely to show up in this text\nhere because you know it's leading\nhypothesis is this text was probably\nwritten by an AI system\num\nso this is sort of an issue that can\narise you know when you're sort of\ntrying to get interesting and useful\nresults out of a predictive model\nokay\num\nan interesting thing that sort of can\nhappen here as well is you know if I\ngive it you know the same thing sort of\nearlier in the context where I'm like\nyou know who do you think wrote this\ntext you know but I asked right at the\nbeginning it's a little bit uncertain\nyou know it's not actually sure and\nthere's an interesting thing that sort\nof is happening here well when we often\nwill do this thing where we condition\nmodels what we'll do is we will sample\ndata from the model itself feed them\ndata from the model back into the model\nright and then use that as a conditional\nto then sample additional text right so\nif we if we look at what I was doing on\nthis previous Slide the thing that I'm\ndoing you know the thing that I'm\nimplicitly doing when i'm interacting\nwith the API in this way is I'm\nconditioning on some text I'm then\ngetting a continuation of the most\nlikely prediction given that text and\nthen I'm taking that prediction treating\nit as a conditional adding on some extra\nthing and then getting another you know\nprediction right and so when you're\ndoing that well you're implicitly in\nsome sense increasing the probability uh\nof predicting an AI because now the\nmodel gets to see a bunch of text\ngenerated by an actual AI in this\nparticular situation and so you know now\nit's more convinced maybe that this\nthing is in fact written by an AI okay\nand there's a bunch of experiments that\nI've done this result I think is\nrelatively robust\num that models when they see text those\nwritten by other models um become more\nconvinced that it was written by an AI\num and not by human\nquestion pretty basic question but why\ndo you have the hashtag at the beginning\nof the prompt oh it's just marked down\nI'm just trying to simulate markdown\nhere that's a markdown header\nquestion have you tried like doing the\ngeneration and then rewriting it like\nmore human-like or like\nre-prompting the model to like okay\nrewrite this but like not like it's like\nan AI\nI have actually been running a bunch of\nexperiments like that\num I yeah I I I'm not gonna go sort of\ninto a bunch of results here I think the\nmain thing I'll say is because it's not\nyou know super relevant right now but\nbasically I think these sorts of\nexperiments are super interesting and\nuseful and I would recommend people\ntrying to run them I think they're not\nhard to run if you have access to you\nknow open AI API sort of thing\num so yeah I think that you know I'll\ntry and understand in what situations\nwhen models do these sorts of things\nthat is I think quite a valuable thing\nto do and so I'll just say I think these\ngarments are good I've worked on some of\nthem maybe later I'll talk more you know\nif you want to ask me after about some\nof the experience I've been running\nrecently but\num I don't want to go into too much\ndetail right now\nthe basic point is just that there are\nsort of situations where especially when\nwe do things like feed model output back\ninto the model and even when we don't\neven when you're just like given an\narticle that was like you know maybe in\nfact written by an AI\num the model can you know get suspicious\nthat it's maybe leading hypothesis\nbecomes this thing that you're asking me\nfor was probably most likely written by\nan AI system\nokay question\nso it feels like but that to the model\nguests are predicting\nthe less the touch looks like Cornell\nsorry it feels like the better the model\ngets a predicting the less the text it\ngenerates looks like an AI\nso and in the limit\nthe text might just look like a human to\nit\num yeah what do you think I think that's\nApplause by process the hypothesis that\nas models get better maybe their texts\nwill just like like humans I think I\nthink that hypothesis is false and I\nthink there's evidence to support that\nthat from experiments that I've seen\num I guess the thing I would say is as\nmodels get better their text is more\nlikely to look indistinguishable from a\nhuman text to a human but it's very\nunclear whether that text will continue\nto look indistinguishable from a human\nto a model\nquestion If you're trying to get the\nmodel to predict human text then within\nthe better the model get the more human\nit would look until you have a\nsuperhuman AI would act exactly like a\nhuman would yeah so very unclear because\noftentimes we're we're asking for things\nvia our conditional that is sort of\nimplicitly putting it into a situation\nwhere the AI hypothesis becomes more\nlikely and in many cases there will be a\nbunch of AI text on the internet and in\nthe distribution of things that it's\ngenerating from right and so if you put\ninto a situation where you give it an\nobservation conditional that increases\nthe probability of the hypothesis and AI\nwrote This and That hypothesis is not\nthat unlikely even on prior is because\nthere's lots of stuff on the internet\nwritten by AIS well then you know you\nmight be you might be getting some\noutput that is that is you know\npredicted to have been from an AI\nokay so let's yeah\nalready the case yes already the case\noftentimes yeah yeah okay\num yeah so I just I just mentioned this\nbut\num this sort of problem you know uh\nthere's a lot of situations where you\ncan think about this so\none thing to think about is oftentimes\nwhat you will what we sort of want is we\nwant to get really useful outputs from\nour models right and so when we ask our\nmodel for a really useful valuable\noutput right we want it to you know\nsolve alignment research right give it\ngive us your best you know attempt to do\na really good alignment paper right well\nas AIS get better at solving problems\nthan humans\nwhen we ask for really good solutions to\nproblems it becomes more and more likely\nthat an AI was the author of that really\ngood solution so if we think about this\nin the case of Chess and we ask for we\ncondition on observing a really you know\nbad chess game well you know for a\nrandom chess game or a bad chess game\nyou know it could have been generated by\na model with low parameters or it could\nhave been a human\nbut if I ask for a you know chess game\nthat is you know extraordinarily you\nknow good a really really strong chess\nmatch at a certain point for you know\nthe goodness possible goodness of Chess\nwell you know almost all players at that\nlevel are AIS right you know the AI\nsystems are substantially better at\nhumans than chess and if I ask for a\nchess game that's good enough the most\nlikely author of that chess game was was\nwas a model it was an AI system some\nvariety\nand so as we get into situations where\nwe sort of are asking for extremely\npowerful and useful things we want our\nmodels to do really you know impressive\nstuff and models get better at doing\nimpressive stuff the probability of\nthose things you know in fact you know\non the distribution of all possible\nauthors gets shifted towards the AIS\nand this happens both over time as you\nknow AIS get better at things and as\nthere becomes more data in the corpuses\nof actual things that exist in the world\nthat are in my AIS and it also happens\njust as we become better at asking for\nthings right as we start asking our\nmodels and giving them observation\nadditionals that really strongly\nconvince them to predict you know what\nother AI systems would do we are you\nknow pushing into a situation where we\nyou know it substantially increases the\nlikelihood of the hypothesis this thing\nwas produced by an AI\nquestion\nso\nI'm it's still not clear for me if we\ncan will be able to make it\num predict the future and not a Sci-Fi\nabout the future but if it can it\ndoesn't seem much harder to do it with\nalternative universe so we say that\nplease write an essay in the alternative\nUniverse where John for 91 before the\ninvention of computers solved the\nalignment problem so I think that's\nprobably not possible I think\nsort of a counterfactional because\nit's just never can be convinced right\nthere's just no way that you're going to\ngive it enough information for it to\nbelieve you know it has some Camp some\nnotion of its cameras that are observing\nsome notion some particular notion of\nhow the data was collected from the\nworld and observing a world where you\nknow somehow Jon Von Neumann never\ninvented you know the Norman\narchitecture is just not something that\nyou're ever going to give an information\nto convince it has happened in the world\nright so we'll talk later about what\nsort of conditionals might be plausible\nit might sort of be things that you\ncould give an information that would\nconvince it that some different thing\nhad happened I think that one is\nimplausible and it's probably not likely\nto work it's probably just likely to\nconvince the model it's in some\nfictional situation now I also point out\nyou also mentioned in your question you\nalso mentioned in your question that\nthis has to be about the future well\nthis is not have about the future right\nso even in situations right in any\nsituation right even right now models\nalready have some probability that\ncurrent text is generated by an AI and\nso it's not the case that you have to\nhave your model be predicting the future\nfor it to predict that a thing would be\nproduced by an AI it's sufficient for\nyou to be in a situation where the model\nbelieves there's some probability right\nnow that any piece of text was written\nby an AI the fact that I've observed you\ncondition on observing something that is\nreally powerful and useful output that\nis more likely to be generated by an AI\nthan a human and that shifts that prior\ntowards it being more likely to be\ncurrently have been generated by an AI\nrather than a human\nokay\nokay so this is sort of this sort of\nproblem I haven't really talked yet much\nabout why it's a problem but this sort\nof basic concept of you know moving into\nsituations where it's more likely that\nsome particular you know generation uh\nyou know the hypothesis of the author is\nan allies likely to increase\num and and not just increase with time\nbut increase with as we ask for more\nuseful and Powerful results\nokay I think this is an existential risk\nso why is this an exit risk let's sort\nof walk through it so uh well we can\nthink about you know in this sort of\nsituation where you know maybe the\nexample task that I want to do is I want\nto ask for you know a solution to the\nalignment problem at some point I think\nyou know we are going to want to move\ninto situations where we as alignment\nresearchers are going to want to\nautomate our own jobs you know in the\nsame way that you know we want to safely\nyou know automate all the various\ndifferent things that AI is going to\nautomate you know at some point AI is\nalso going to be doing alignment\nresearch we'd really like it to do that\nwell\num and so we can ask you know as as take\nthis as sort of an architectural ARCA\nyou know archetypal task where we want\nto understand you know how would we\nalign the AI sort of doing this right\nand we've sort of talked about this\npreviously when well if we're trying to\ndo something like that in this case the\nmost likely situation for you know a\nreally complete full solution to the\nalignment problem is you know may not be\na situation where it's written by humans\nright you know in the same way that well\nwe're going to probably be moving to\nsituations where AI is doing more and\nmore of this work the most likely\nsituation may be a situation where was\nwritten substantially by a human and so\nyou know even if you can get into a\nsituation you can give it a really good\nobservation conditional they convinces\nit that you know definitely you should\nproduce some really good alignment\nresearch and not you know some fictional\nthing or whatever\num you might you know get a situation\nwhere well the most likely thing might\nbe uh that it was written by an AI\num and the problem of course is that the\nAIS that might exist in the world may\nnot be safe and so if I'm generating\nfrom the distribution of all possible\nthings that AI authors might write many\nAI authors that exist in the world or\nmight exist in the world in the future\nor might counter factually potentially\nexist in the world right now but the\nmodel is uncertain about which AIS exist\nright now many of those might be safe it\nmight be uh might be saying many of them\nmight be unsafe right so there may be\ndeceptive AIS we've talked about you\nknow the potential for a deceptive\nalignment if I have a predictive model\nand that predictive model believes that\nsome other AIS in the world could\npotentially be deceptive and I ask it to\nproduce some output and that output it\nthinks is probably generated by an AI\nthen the distribution of AIS containing\nmany deceptive AIS is not safe for it to\ngenerate from it'll generate from a\ndistribution that contains many things\nthat are deceptive\num and those deceptive models might be\ntrying to do you know really bad things\nwith their tax and so you can end up in\na situation where your model is trying\nto predict what a very you know\ndangerous malign deceptive model might\ndo and this is dangerous we sort of\ntalked about this previously even if\nyour model is highly you know not super\nintelligent even if your model is\nsubhuman right even if your model is as\nintelligent as I am\nif you had me and I was running around\ntrying to do what a malign super\nintelligent and I would do you should\nnot trust me right that is not a safe\nthing uh you know for me to be doing\nit's not a safe thing for you to want to\nlook at the output of uh a prediction of\nwhat a malign super intelligent and I\nwould do even if that prediction is not\nitself super intelligent right so we can\nthink about this as you know in the uh\nyou know alignment research case it's\nnot the case necessarily that it's going\nto be able to produce the exact best you\nknow alignment research that would in\nfact be written in some situation but\nyour model's best attempt at it you know\nyou still want it to be attempting to\naccomplish a prediction task where we'll\nbe predicting something good and not\npredicting something bad right and so if\nyou're in a situation where the model's\nbest attempt at predicting what a malign\nsuperintelligent AI would do is the\nthing that you are looking at you're not\nvery happy and I don't think you should\nbe very happy with that and I think it\nis you know highly an unsafe thing to be\ndoing\num and it could you know in and of\nitself be an accidental risk even below\nthe sub-human level if you have\nsufficiently many AIS that are going\naround in various different parts of\nsociety each individually predicting\nwhat they think some future or you know\npresent possible malign super\nintelligent AI would do\nokay and so the thing that's sort of\nHappening Here is that you're highly\ndependent on your model your predictive\nmodel's beliefs about How likely it is\nfor uh potential other AI systems that\nmight exist in the future or might exist\nin the present are to be aligned and so\nyou're in some situation where\nessentially if your predictive model\nbelieves that most AIS will be aligned\nthen the distribution will be safe to\ngenerate from but if it believes most\nAIS will not be aligned then it will not\nbe safe to generate from\nand the sort of problem with this that I\nhave is that it is not helpful right we\nsort of hoped to be in a situation where\nmaybe we could use predictive models to\nhelp us you know generate text and you\nknow do things that were useful in ways\nwhere if we were in a world where AIS\nwere likely to kill us all then it would\nnot kill us all right where we would be\nable to change things right where Maybe\nby default you know AIS were likely to\nbe dangerous but instead by using these\npredictive models maybe we can make it\nso that things will go okay but so the\nproblem that we're sort of running into\nis that well the predictive model is\nonly going to generate safe text if it\nbelieves that AIS will in general\ngenerate safe tax and if it doesn't\nbelieve that then it won't and so it's\nsort of you know relative to the\nBaseline of not doing this at all it's\nnot helpful because we just sort of end\nup in the same situation it just gives\nus the same distribution of things that\nAIS would generate from uh you know\nregardless of whether we had done this\nor not\nquestion\nisn't it possible to design specific\nexperiment where based Idol the media\nteam logite or something like that we\ncould\nuh detect if our AI considers that most\nof the AIS are aligned or not\nyeah absolutely maybe I absolutely think\nit is totally I'm not meaning to say\nthat this is a unsolvable problem\num I am just pointing it out as a\nproblem in fact we'll talk\num in a little bit about you know what\nsome solutions this might look like but\nabsolutely yes one way you could try to\naddress this problem would be well let's\njust figure out you know does our model\nyou know believe that it is you know\ntrying to predict you know malign AIS or\nnot\nyou know via whatever mechanism um you\nknow maybe by looking inside of its\nmodels via some sort of probability\nquestion I mean uh even among uh like 11\nresearchers is pretty high variance in P\nDooms so like you train this uh below\nhuman level predictor\num well what's a is it possible for it\nto just like say I'm not sure if like\nfuture Li AI egis is going to be\nmaligned or not so I'm just going to uh\nI put something that's benign something\nat the line and then uh or like it's\ngonna yeah I mean you'll get malign\nversus benign outputs in exact\nproportion to your model's beliefs about\nwhether it thinks the line or benign\nthings are more likely yeah\nquestion so\num you've mentioned\nI think it might have been understanding\nhow we can this head can or cannot\ndistinguish between these online\nsituations\nyou mentioned that for knowing if it is\na fiction or really the future let's say\nwe can just solve it by adding more\ncameras can you say a more specific\nexample of how you would imagine doing a\ngood pronged or good\num\nconditioning that causes it to act not\ndo fiction also really simple is you\nknow the thing I was just talking about\nwhere you give it a date right if you\nformat it like an archive article it's\nmore likely to continue it like an\narchive article and not like fiction\nright that's a really simple example and\nwhy doesn't it work against assuming\nthat this so you just said it it was\nwritten by Paul Cristiano and\num a bit of a flesh and blood human\nbeing with an ID card from the US\ngovernment well so again again we sort\nof run into this core issue that we\nstarted this whole thing at which is\nobservations are not always trustworthy\nright the only thing that we can\ncondition on are observations about the\nworld right we condition on observing\ntext that says it was written by Paul\nCristiano with an ID card but we can't\ncondition on actually that is in fact\nthe case and there may be many\nsituations where for example there might\nbe you know one thing that could happen\nis there could be a deceptively aligned\nAI That's out in the world that's\nposting fake articles they know that\nclaim to be from Paul Cristiano you know\nthat contain really good alignment\nresearch right there's lots of possible\nsituations in which you could have you\nknow deceptive you know potentially\nmalign AIS trying to do things that\nwould look like you know a sample that\nwould be generated you know that the you\nknow that you know the sort of thing you\nmight ask for when you're conditioning\nbut I can also write a novel in which I\ncan write the phrase that this is an\narchive article from Bookers then I\nwould feel that it's when it sees this\num starting boring problem that this is\nan archive article from bulkarest and\nthe solving I mean the most likely\ncompletion that this is in a Nova while\npretending to be a documentary the\nsecond most likely is that it is\nactually poor Cristiano from the future\nand the third most likely is that it's a\nsuper intelligent AI from the future I\nmean first of all maybe like I have no\nidea idea what the distribution of\nprobabilities will be of a future AI\nsystems on what actual attacks will\ngenerate in the world second that\ndistribution is going to get worse over\ntime right so as it gets into a\nsituation where maybe right now if I\njust condition on observing you know\nPaul Cristiano you know writing some\narticle you know it's most likely to be\nwritten by Paul Cristiano and not by an\nAI system but as AI systems become\nsubstantially more prevalent to the\nworld that becomes that that calculus\nchanges a lot and third as we gain\nbetter and better abilities to condition\non that text also being good right like\nnow we're conditioning also on you know\nsomebody took this system we like you\nknow have some observation of somebody\ntaking this proof of alignment or\nwhatever and you know running into some\nsituation and in fact results and all\nthese really powerful things right as we\nhave the ability to throw in more and\nmore observations of like the quality of\nthe content that we're looking for we're\nasking for really really high quality\ncontent that shifts the distribution\ntowards you know AIS because the AIS\nwill be more likely to have been\nproduced these really powerful content\nas the AIS get better right and so it\nmay be the case that right now the\ndistribution looks okay but it may in\nfact you know shift over time and not\nnecessarily you know even just you know\njust as there's more AIS in the world\nand those AIS get better that dispution\nwill shift question uh would you say\nthat language models shouldn't be\ntrained with those ideas so that they\ndon't suspect that is often a pretend to\nbe human\nah so it's unclear like what you could\ndo to like you know you could try to\nlike give the model information that\nwould like convince it that in fact you\nknow AIS are likely to be good in the\nfuture importantly right so the thing\nwe're talking about here is where this\nis not a situation where like it's\npredicting some you know sci-fi fiction\nwritten about a paperclip maximize or\nwhatever right this is a situation where\nit's like what are the model's actual\nbeliefs right on the far left about the\nhidden variables in the world right what\nare its actual beliefs in terms of its\nit thinking about understanding the\nworld about How likely real AIS are to\nbe aligned or not and so these arguments\nthat we can consider presumably similar\nto the sorts of arguments that would\nconvince you know humans or just like\nyou know whatever things are in fact\nuseful information to understand How\nlikely uh you know AIS are to be aligned\nor not\num\nbut it's things that oh this is\nsomething that the AIS are often asked\nto do and then it's a sexually simulatic\nand AI yes so that could absolutely\nhappen a situation where you ask it to\nlike do something we\nyou're asking it to try to simulate a\nhuman but in fact it knows this is the\nthing AIS are asked to do and so it\npredicts an AI absolutely I think that's\npossible that's totally the thing that\ncould happen it could be a problem here\nyeah question\nI'm sorry okay\nmoving on then okay so I'll talk briefly\nabout some of the sorts of ways you\nmight try to address this though I'm not\ngoing to go into a ton of detail\num\nbut you know maybe the most canonical\nsolution sort of just building off of\nour basic understanding of conditioning\nand observation conditionals is well you\ncan condition on worlds where super\nintelligent AIS in general are less\nlikely right if you can give it\ninformation via the observation that\ngets it to you know back and for\nvariables about how many super\nintelligent AIS they are in the world\nand how likely they are to be doing\nthings that convinces it that they're\nprobably not there and not doing things\nthen maybe you can get it to you know be\nless likely to fall into this failure\nmode and so\num\nnow again we can only condition on\nobservations we can't directly condition\non the actual you know there is in fact\nno AIS but there are things that we can\ndo so you know a simple example would be\nyou know you could try to restrict you\nknow to the nearest future rather than\ntrying to predict you know in the\nlong-term future where AIS become more\nand more likely\num\nthere could be competitiveness concerns\nhere but you could try it um you can\nalso condition on you know maybe you\nknow major world events or news stories\nor whatever that tell the model some\nfact about the world they convinces it\nthat maybe it's less likely for there to\nbe a you know lots and lots of AIS in\nthat world you know something like\nthere's you know a devastating solar\nflare or you know earthquake in Taiwan\nor whatever you know simple examples of\nsort of individual pieces of information\nthat if the model believes that you're\ntelling the truth about that piece of\ninformation if you can convince it with\nthis observation condition additional\nyou know sufficiently then it reproduces\nthe probability of those hidden\nvariables around you know how many AIS\nare probably in the world that is\npredicting from right\num of course we don't actually want to\ncause an earthquake in Taiwan or\nwhatever but the point is that there's\nlots of these sorts of pieces of\ninformation which convince the model\nabout those hidden variables that are\nrelevant to you know would it you know\nHow likely is it to predict you know\nthat some text was generated by an AI\nquestion I'm curious about uh how do you\nhow can you mention an earthquake in\nTaiwan in that\nin an article a fake article written by\nPaul Christian Louboutin so we talked\npreviously about this idea of you know\nincreasing cameras right where you can\nactually sort of go into a situation\nwhere right now you know this would be a\nvery difficult thing to take your\nlanguage model and just sort of do\nthough that's not even clear language\nmodels actually do have the ability to\nsort of take one article you know insert\nan end of tax token and then have an X\narticle oftentimes it really depends on\nthe exact training setup but it's very\noften the case that you can do\nmulti-article sorts of conditionals as\nwe're sort of moving into a situation\nwhere models have the ability to\ncondition on more and more cameras right\nwhere their cameras become broader and\nbroader in terms of the ability to\nobserve more things about the world then\nyou gain the ability to sort of inject\nmany additional sorts of observations so\nwe're definitely sort of in a situation\nwhere we're imagining that our ability\nto sort of increase the broadness of our\nmodel's cameras increases over time\nright and so you know one situation of\nthat is you know I can do an alternical\nor metadata conditionals you know I can\ndo multimodal conditionals I can\ncondition on actual cameras in the world\nyou know all sorts of things where I\nhave observational systems that I can\ntrain a predictive model to to predict\nand then you know do observation\nconditionals on\nokay\num great so we're sort of imagining some\nof these sorts of things you know there\nare things that you can condition on\nthat provide some information about you\nknow how likely it is to you know\npredict from you know some AI system one\nimportant fact here is that we're sort\nof limited by this Baseline probability\nthat a super intelligent malign AI sort\nof already exists in the world so the\nmodel is going to start with some\nprobability that there's like in fact\nalready at any individual point in time\nyou know regardless of any new\ninformation it receives it already\nbelieves based on the information it's\nseen so far How likely is it that you\nknow there's in fact a super intelligent\nline AI already running about that\nprobability presumably starts relatively\nlow you know because by the time you\nproduce your first predictive model it's\nprobably substantially before the point\nin which you know there's some super\nintelligent line AI running around but\nit's going to be someone uncertain right\nit's going to have some probability on\nwell maybe you know I haven't seen it\nyet but like secretly there's some super\nintelligent AI somewhere in the world\nand that probability is going to be\nhighly limiting because that super\nintelligent line AI could potentially\nsort of you know spoof any observations\nyou try to condition on and so as you\nsort of you know condition on more and\nmore things that make you know it less\nand less likely uh know that uh you know\nhumans generated it the probability of\nthis hypothesis you know goes up and off\nand so you're sort of often very sort of\nlimited by this sort of Baseline\nprobability of on this hypothesis\nquestion\nso if\none question I had which is basically\nlike\nhow seriously do you take the idea that\nwe should be sort of curating what we\nwrite in case future AI is like trainer\nlike here's a specific example let's say\nI would have write up an article which\nis titled trying Evan's conditional\ncondition and predictive models on\ndepend on text DaVinci where I do things\nlike conditioner on an earthquake in\nTaiwan right in our goal and then ask it\nHow likely it is that this article is\nwritten by an AI compared to if I just\ngive it the article original that seems\nlike a useful experiment but also then\nfuture models would be potentially able\nto see this article and observe that\nthat is in fact something that we are\ncommissioning AIS on even though it's\nnot real on the one hand that seems\ndangerous on the other hand that might\nbe 2 000 tokens out of a two billion\ntoken data set\nhow\nbig an issue do you think that might be\nI mean the simple answer is I don't know\nmodels probably going to be able to\nfigure out lots of things even if you\ndon't tell it explicitly right you know\nthe data set is very large it has access\nto a lot of information it's probably\ngonna be able to figure out various\ndifferent things so limiting its ability\nto understand something about the world\nbecause you've just didn't write about\nit online is I think a pretty difficult\nlever to be able to control the model\nwith and so my guess is that we sort of\nmostly don't want to be focusing on that\nlever of control\num but but I don't know is the is the\nreal answer so your intuition is that\nlike the loss of potentially good\nresearch would be what would be worse\nthan giving them a lack to that research\nbut yeah intuition it's definitely that\nis what I think I think we probably you\nknow I think that yeah I I I I\ndefinitely think the loss of good\nresearch outweighs I mean that seems\nquite clear that like the good research\nis in fact quite valuable and this is\nonly like small tokens that might you\nknow give it a small amount of\ninformation\num but yeah I mean I think that you know\nsometimes I've seen people who've been\nreally concerned about this try to take\nyou know individual things and sort of\nput them in ways where they're less\nlikely to be scraped so you can put\nsomething on the Internet and sort of\nyou know have it tagged in such a way\nthat it's less likely to be scraped into\nyou know a model's training decks you\ncan do sorts of things like this I think\nthat it's um I think it's something that\nyou know deserves more thought though in\ngeneral I think it's probably not a big\nconcern just because it is such a small\nportion of the model's data set and the\nmodel is probably gonna figure out\nwhatever those things were anyway you\nknow as the model becomes more\nintelligent\nokay but um yeah so there are sorts of\nthings that you can do you know we're\ntrying to condition on you know various\ndifferent observation conditionals that\ncan you know give the model information\nthat you know makes it more likely for\nthe continuation to be\num you know not some super intelligent\nline AI there are other Solutions as\nwell again I'm not going to go into too\nmuch detail but there are other sorts of\nthings that you can do here where you\ncan you know try various different\nstrategies to you know give it\ninformation or condition in various\ndifferent ways or use it in various\ndifferent ways to try to reduce this\nthis sort of General probability of it\nyou know predicting some malign AI\nsystem\num and there's other big challenges as\nwell so one thing that I'll sort of\nmention very briefly is this idea of\nself-fulfilling prophecies where when\nyou have a predictor and the predictor's\ntext is then fed directly back into the\npredictor\num things can get a little bit weird in\nterms of like what it actually even\nmeans to be a predictor in that case and\nyou know if it's predicting itself and\nwhat that looks like\num this can cause a whole bunch of weird\nissues that I'm not going to go into a\nbunch of detail on and you know there's\nvarious different ways to address these\nsorts of issues but I mostly sort of\nbring it up as a way to talk about a lot\nof other issues you know if you really\nwant to sort of go about and try to\ncondition a predictive model effectively\nto get it to do you know useful and you\nknow reliable you know safe stuff\nthere's a lot of sort of tricky issues\nthat sort of you start running into\nokay\nI mentioned I would talk about this\npreviously but I think that you know a\nreally key question is sort of\nunderstanding and what capability levels\nof your model do these sorts of things\nmatter for and so you know I think that\nyou can think about this as sort of two\naxes there is the capabilities that your\nmodel has and there's the capabilities\nthat you're asking it for right what do\nyou want your model to do right and\nthere's sort of an issue where you know\nif you have a really incapable model and\nyou ask for like a full solution to the\nalignment problem and the model is\ntotally incapable of doing a full\nsolution to the alignment problem then\nyou're sort of asking for too much\nyou're in a situation where like asking\nfor a full solution to the alignment\nproblem increases the probability of the\nease of malign AI hypothesis but it\ndoesn't make the model's outputs better\nbecause the model can't do really good\noutputs right it's not capable of\nproducing really really good alignment\nresearch and so all you're doing is\nyou're just increasing the probability\nof this Melania hypothesis without sort\nof in fact getting any useful work out\nin exchange now of course it's not that\ndangerous to increase the probability of\nthe Mind hypothesis for an AI That's not\nthat powerful but as we talked about\npreviously I think that that's a very\ndangerous thing because you know to sort\nof to go down because well even for\nrelatively dumb models you don't want\nthem to be trying to pretend to be a\nmalign AI\num and so I think you really want to be\nin a situation where you're you know not\nin this sort of asking for too much\nregime\nquestion so I'm a bit confused what you\nmean by uh increasing the malign AI\nhypothesis do you mean that asking it\nfor these capabilities makes it more\nlikely to think that the AI is mine or\nis it more like there's like a p percent\nchance that the AI I think it's an it's\nmalign and so also if something will\nincur that P chance but also it won't\nget us the thing we want even if it's so\nthe idea is the model has a bunch of\nhypotheses about what author might have\ngenerated some text one of those\nhypotheses is it was a malign super\nDodge in AI another hypothesis is like\nit was a human as you ask for text that\nis more and more powerful and more and\nmore sort of like impressive and it\nwould be more likely to be generated by\nsuper Belgian AI rather than a human\nright the likelihood ratio pushes you\ntowards the hypothesis it was generated\nby a malign AI goes up relative to the\nhuman hypothesis and\num if that is happening and you're not\nyou're doing it without getting any\nadditional actual useful work out of it\nthen that is this this regime what I'm\ntalking about where you're like you're\nsort of asking for too much you're\nasking for you know things that it can't\nactually do effectively and so all\nyou're doing is you're changing the\nprobability of those hypotheses in a\nnegative way without getting actual\nuseful alignment work out in exchange\nokay\nokay and then obviously on the other end\nwe have this sort of capabilities\noverhang where the model has the ability\nto do more than you have the ability to\nask it for effectively right and this\nwould be a situation where maybe you\nknow every time we ask it for useful\nalignment research it just gives us\nfiction because we don't know how to get\nit into a situation where it actually\ngenerates good alignment work even if it\nhas that capability right it might\nactually be able to do good alignment\nwork but if we can only condition in\nsuch ways that it always does fiction\nthen we can't access those uh not that\ngood alignment work right because you\nknow we don't have the ability condition\neffectively and so this is the sort of\nyou know thing that I would be thinking\nabout when you're sort of thinking about\nyou know what's happening as a sort of a\nlistening capabilities\num from models\nso the way that I sort of think about\nthis in terms of like when do when does\nthis sort of conditioning stuff really\nmatter is um there's sort of a starting\nregime right where your model is not\nvery capable and um it's not sort of uh\nyou know we're not asking very much\nwe're sort of obviously safe and then\nsort of you know as your as you ask for\nmore right you start entering this you\nknow situation where it starts to become\nuh you know it's still safe because your\nmodel is not very capable but you're\nsort of asking for you know too much\nthere's sort of a thing that happens\nonce your model becomes capable of being\ndeceptive once if the model was trying\nto mimic what a deceptively aligned AI\nis doing it would be capable of tricking\nyou right once you've passed that\nthreshold then you have to start to be\nconcerned because now if you're in a\nsituation where you're asking for you\nknow something that uh would\ntheoretically be more likely to be\ngenerated by you know malign super\nintelligent AI then you're you know\nyou're going to get something you know\ndangerous\nI mentioned this previously but there's\nanother thing going on here as well\nwhere there's a point in which this\nstops working at all because once you\nask for you know capabilities that are\nmore powerful than any group of humans\nor any set of humans or any possible\nhuman could ever generate in any\nsituation\num then you're not going to get a human\nright like if the model is capable of\nthat even the model is capable of doing\ncapabilities and those capabilities\nwould never in any potential situation\never be done by a human then no amount\nof observation conditionals were ever\nconvince the model to generate what a\nhuman would do in that situation it's\njust going to generate what an AI would\ndo and so in that situation you're just\nstuck with the prior on does the model\nthink most IIs are going to be aligned\nor does it think most the eyes are going\nto be misaligned but there's a regime\nwhere you're asking for capabilities\nthat are sort of within the realm of\nthings that humans might do but they're\nalso within the realm of things that AIS\nmight do and when you're in that regime\nwhere it's sort of both hypotheses are\nplausible then it's really important\nthat you sort of are making sure that\nyour model is generating you know from\nthis from the you know not from what\nline and eyes might do and it especially\nbecomes you know potentially essentially\nrisky but solvable once you're in that\nregime and the model that's capable of\ntricking you if it were trying to do so\nokay\nso this is sort of how I see the sort of\nshape of you know capabilities uh you\nknow and how these sort of things change\nwhen you're dealing with predictive\nmodels\nokay\ngreat\nokay so now we're going to take a bit of\na step back so we've done this sort of\nDeep dive you know to some extent at\nleast you know there's there's certainly\nmore to be said and there's more in the\nfull paper the link that I'll link at\nthe end but\num\nwe've sort of operated on this\nhypothesis that large language models\nyou know uh are well described as these\nsorts of predictive models but of course\nI you know tried to harp on this at the\nbeginning we don't know that that's the\ncase there's no you know it's not\nnecessarily the case that we you know\nthey are going to be well described this\nway they may not be well described this\nway right so you know we want to try to\nunderstand you know How likely is it in\nfact you know for them for them to be\nwell described as predictive models so\none in particular way and I sort of\nmentioned this previously that you could\nend up with um you know large language\nmodels\num that aren't well described as\npredictive models is when you fine tune\nthem so we sort of talked about well\nokay it sort of makes sense maybe we'll\ntalk more about this in a little bit but\nit might make sense for large language\nmodels that are trained um on just sort\nof web tax prediction you know in a very\nsimilar way to you know the you know out\npredictor style thing would be trained\nto act as predictive models where\nthey're just sort of predicting from\nsome dispution but once I take that\nmodel and I train it in a situation\nwhere I'm trying to get it to maximize\nsome reward function\num then it's much less clear you know\nwhether you're now in a situation where\nthe model is going to\num act as a you know reward maximizer\nand make you know start acting like an\nagent in many of the ways we've talked\nabout previously like in the uh you know\nrisk and learn authorization talk and we\nwere talking about deception where the\nmodel can start to act more like an\nagent where it has proxies and goals and\nyou know we start to run into all of\nthose same sorts of issues and so it's\nvery unclear you know once you take a\npredictive model and you try to get that\npredictive model to act like an agent\nwhat does it do\nnow I think there's at least two\nhypotheses right so one hypothesis is\nthat the model when you try to get an\nact like an agent it becomes an agent\nright you get something that's basically\nwell described as an agent in the same\nway we've sort of been talking about you\nknow uh like a mace Optimizer it has\nsome goal you know it's trying to do\nsomething but there is at least another\nhypothesis which is the rlhf\nconditioning hypothesis where the idea\nis well maybe when you take a model and\nyou fine tune it you know via\nreinforcement learning from Human\nfeedback or rhf or maybe some other RL\nuh you know fine-tuning process it could\nalso just be well described as a\nconditional on the original pre-training\ndistribution so you can think about you\nknow if I take a predictive model and I\ntrain it on um you know I do some RL\ntraining tasks where I train it on you\nknow uh you know get high reward be you\nknow a very common thing here would be\ntry to be helpful to a human evaluator\nwell then you might just get the\ndistribution of possible agents that\ncould exist in the world conditioned on\nan agent that is really helpful to a\nhuman evaluator right so that is a very\nplausible thing that you could get you\ncould get a predictive model that is\npredicting the distribution of possible\nyou know things things that might exist\non the internet conditional on those\nthings being really helpful to a human\nevaluator and so now you can sort of\nthink about that thing in a very similar\nway to we've been thinking about other\nobservation conditionals\num\nthough it's a little bit different than\nsome of the conditionals because it has\na has a lot more power right because now\nyou're not just conditioning on some\nvery straightforward like observation\nyou're conditioning on any observation\nyou know any possible you know\nhypothesis that has the property that it\nwould have you know really high you know\num you know performance According to\nsome human evaluator\num so an example you know this sort of\nlets you encode many possible\nconditionals maybe you wouldn't be able\nto encode previously though it's also\nvery hard to control what conditionals\nyou would get so even if the rhf\nconditioning hypothesis is true and sort\nof you know RL fine tuning is well\ndescribed as taking a predictive model\nand giving it a particular conditional\nthat it's generating from it can be very\ndifficult to control what conditional\nyou get out of that because it'll just\nbe you know whatever the most likely\nconditional would be such that if the\nmodel conditions on that conditional it\nresults in good performance according to\nhumans we don't necessarily know what\nthat condition would be so in the same\nway that we've sort of been thinking\nabout you know when you take a model and\nyou know you have this big model space\nand you search over all the possible\ndifferent models they might have some\nparticular you know performance on some\ntraining distribution well you don't\nknow what sort of algorithm you're going\nto get and so in the same way if you\nthink of RL fine tuning is searching\nover all possible conditionals to find\nthe conditional which would in fact\nresult in some particular performance\nwell it can still be very difficult to\ncontrol what exact conditional you\nrepresent uh you end up getting and so\num we don't know whether this hypothesis\nis true whether it is in fact the case\nthat when you take a model and you fine\ntune it you will get um you know\nsomething that's well described as a\nconditional it if it is true then we\nsort of have to deal with these issues\nabout you know what conditional how do\nwe control it and the same sort of\nissues we were just talking about and\nmaking sure you get good conditionals if\nit's false then we have to deal with\nthese sorts of Agency problems that we\ntalked about previously about you know\nyou have a mace Optimizer how do you get\nit to you know have aligned goals and\nall these same sorts of issues\nokay\nquestion I'm not entitled to commit to\nthese two things are mutually exclusive\nbecause it sort of seems like the our\nrelative conditioning hypothesis is like\nor something you would get out of like\nthe rlh of conditional is an agent uh in\nthe same way that humans are agents so I\nmean but do you seem to believe that\nthese two things are very different so I\nthink they are different so here's a\nreally simple example of a wage they\nmake different predictions so if I take\na like model and I give it an\nobservation conditional it gives it\ninformation about what the General\nDistribution of Agents on the world in\nthe world would do that piece of\ninformation is extremely relevant and\nuseful uh for a predictive model right\nif I'm predicting an agent I'm\npredicting you know some Asian which has\nsome property then knowing more facts\nabout the General Distribution of agents\nthat I'm written from will help me a lot\nbut if I'm just an agent I in fact just\nall I do is I care about optimizing this\nobjective telling me about what other\nGeneral agents in the class do is not\nreally relevant for me right I you know\nI'm not going to change my behavior\nnecessarily based on you know what other\nagents would do and so there is a sort\nof very fundamental difference between a\nmodel that is projecting from the\ndistribution of possible agents that\nhave some property and just just itself\nan agent right another sort of way in\nwhich these things are different is that\nif I'm just predicting from the\ndistribution of Agents then it really\nmatters all these properties that we're\ntalking I'm talking about right about\nwhat that distribution looks like so if\nthe model in general believes the most\nAIS are safe then predicting that\ndistribution is going to be safe and\nvice versa in the opposite situation if\nthe model is just itself an agent then\nit doesn't matter whether most AIS are\nsafe it just matters whether we in fact\nfound a safe AI in that instance\nso these things are distinct they are\nquite similar and they're they're\nsimilar at least one respect which is in\nboth of them could be existentially\ndangerous right if I am predicting an AI\nthat you know wants to kill you versus I\nam in fact an AI that wants to kill you\nboth of those are existentially risky\nright but the difference is that they\nmight be solved in different ways even\nif they're both you know similarly\ndangerous\nokay great so you know what are some\nother things right that we could get\nother than a predictive model so you\nknow we talked about you know you get\nyou know so it's a very simple thing\nright you get you know something that's\njust not well described it's predictive\nmodel or unagent at all it's you know\nsomething else it's like a loose\ncollection of heuristics maybe it's not\nsomething that's really even good well\nto be thought of as doing some really\ncoherent task like prediction or\noptimization\num maybe you know again we could get a\nrobustly lined agent right we get an\nagent that's really doing the right\nthing in the same way we were talking\nabout uh previously you know in the\nprevious talks and of course we could\nalso get a deceptively lined agent you\nknow one that is you know and you know\ntrying to do some totally different\ntasks than the one we're trying to get\nit to do there's lots of possibilities\nhere right we get a corrigible align\nagent you know\num we could get various different forms\nof predictive models\num there's a lot of various different\npossible things that you could get\num you know here and so I think that you\nknow there's a bunch more that in the\nsort of in the paper that I'll link at\nthe end um but um\nessentially the point is that well you\nknow in the same way we were talking\nabout previously right where in any\nsituation where you're training a model\nin some particular situation\num your only sort of guarantee is that\nyou're in you you have in fact gotten\nsome algorithm that has some particular\nperformance right and so we sort of\nreally want to emphasize that point you\nknow we still don't know you know is it\ngoing to be an agent is it going to\npredictor we don't know you know we\nthink maybe the prediction hypothesis\nyou know a predictive model hypothesis\nmakes sense in many pre-training model\ncases we think maybe it makes sense in\nthe RL HF case is very very unclear\num but uh you know we don't\nfundamentally know that we can try to\nreason About You Know How likely some of\nthese different things are and what\nwe're sort of going to be doing that in\njust a second question uh that that was\nmy question How likely a predictive\nmodel hypothesis in your opinion so in\nthat case then I'm sure the next level\nanswer that okay yes we will attempt to\nat least try to do a little bit of\nanswering of that question yeah okay so\nlet's try to um yeah so how would we\ncompare the probability of different\nhypotheses for describing you know how\nlikely you know some particular\nmechanistic model of what a language\nmodel is doing would be so one really\nimportant point you know to start with\nis we need to understand you know what\nis it tracking in the world right so if\nit's a predictive model it has a camera\nand that camera is you know something\nthat is tracking the world and um we\nwant to understand you know how does the\nmodel right map the world to the data\nthat it cares about now I think this can\nmake a lot of sense in many different\nsituations we can think about you know\nthe the you know a deceptive model also\nhas to have a camera of sorts because it\nhas to have some way to understand in\nthe world what is its objective right\nwhat is the thing in the world that it\ncares about optimizing over in the same\nway the predictive model has to have\nsome camera that helps it understand\nwhat it's predicting right so it has to\nhave a camera that lets it understand\nbased on some understanding of the world\nwhat is the thing in the world they\nwould predict from or in the deceptive\ncase what is the thing in the world that\nit would optimize for but in both cases\nthere is some procedure that sort of has\nto be essentially hard-coded learned\ninto the model\num hard-coded by grading design not by\nhumans but you know some things sort of\nthe model learns that describes you know\nhow does it understand what thing it\ncares about based on its understanding\nof the world\nand then as well it also has to have you\nknow some way to from that understand if\nthe world compute what its output is\nright so in the deceptive model case\nit's like you know trying to say well\nhow would I optimize for that objective\nin the\num you know case the predictive model\nit's sort of you know trying to predict\nthe next you know observation and so we\ncan sort of think about you know how\ncomplex are these sort of relative\ncameras and Camera you know to Output\nMaps as a way to think about comparing\nthe sort of complexities of different\npossible hypotheses for what a language\nmodel you know uh sort of might be doing\ninternally right so in the same way\nwe've sort of talked previously about\nyou know comparing different you know\nmodel classes like the Martin Luther\nmodels and the Jesus Christ models where\nwe're like okay we can make some\nmechanistic model for what those sorts\nof things might be doing internally and\nunderstand you know how complex you know\nmight be these various different things\nbe on various different versions of the\ninductive biases to sort of compare and\ncontrast you know how likely would we be\nto end up in these various different\npossible situations for what you know\nalgorithmically might we end up with\nokay so we'll try to do that at least\nbriefly for comparing the two particular\nhypotheses of the predictive model and\nthe deceptively aligned model\nso in the predictive model case right\nthe camera that it's tracking has to be\nsome you know physical generalization of\nthe data generating procedure right so\nwe want something like you know whatever\nwould appear on these websites in this\ntime period something like that that's\nsort of what we were hoping to get out\nof the sort of camera for a predictive\nmodel you know we there are a lot of\nsort of cameras here that we could get\nthat we often don't get or sorry that we\ndon't want\num that the paper goes into more detail\non what sort of really bad cameras might\nlook like there's a lot of cameras here\nthat could be quite problematic we\ntalked previously right about the\ndifficulty between distinguish between a\ncamera that cares about the future and a\ncamera that's only looking at some\nparticular fixed time period\num but it you know in general the thing\nthat you care about right is you know\nwhat is that camera uh you know what is\nit predicting right\nand then of course the predictive model\nhas some understanding of how to sort of\nyou know uh compute its outputs that is\njust predict what the next thing would\nbe in that observation\nand then again for the deceptive model\nright the camera's tracking is whatever\nobjective it's trying to maximize and\nthe you know way that it computes the\noutput from that is you know what is the\nbest result according to that objective\nand so then we can ask you know how\ncomplex are these things relative to\neach other on various different versions\nof the inductor biases right\nin the same way that we've asked you\nknow how complex relatively are the you\nknow deceptively lined model and the you\nknow cordially line model and various\ndifferent inductive bias forms I'm not\ngoing to do the same sort of like fully\nin-depth you know uh you know version of\nthis that we did like for deceptive\nalignment but I sort of just want to get\nyou thinking about the same sort of\ncomparisons so so you know maybe we can\nsort of go into a little bit and\nunderstand maybe in something like the\nlow path dependent scenario where we\nwant to understand you know just on a\nbasic Simplicity question how simple are\nthese various different you know aspects\nand if you're like well I think that how\ncomplex the data generating procedure is\nversus the objective is very unclear\nit's going to depend on properties about\nhow complex was the procedure you used\nto generate the data for your model\num how straightforward you know of a\ncamera is there that could possibly fit\nthat data and it's going to depend on\nyou know again how simple the sort of\nsimplest possible long-term objective is\nand then similarly you know how how\ncomplex is prediction as opposed to\noptimization I think that by default\nprediction is probably similar it's\neasier to sort of predict what the next\nthing would be in an observation then to\nsort of optimize across all possible\nactions and find one that results in\nsome Behavior though that is a little\nbit unclear if you're in a situation\nwhere you're directly trying to get the\nmodel to predict Asians right if the\nmodel already has to be able to predict\nlots of other actors which do\noptimization then it becomes less clear\nthe extent to which well you know if it\nalready has an optimization procedure\nthen it can sort of be repurposed you\nknow more easily in this case and so\num\nI think I think the case for which one\nof these is simpler is much less clear\nthan we've talked about previously the\ndata generating procedure might be very\nsimple\num but so might the objective you know\nas simple as long-term objective I think\nthat you like predict you know the next\ncamera observation is very simple but so\nmight be you know maximize some\nobjective the thing I will say though is\nyou know we talked previously about how\nthe case for sort of the deceptively\naligned model being simpler was sort of\none of the most robust cases across all\nthe possible inductive biases for like\nwhy you might get a deceptively aligned\nmodel and the fact that it is less clear\nin this situation is at least a good\nsign that you know maybe when you're\ntrying to get a model that's doing\nsomething like prediction it is at least\nless clear whether you're going to get\nsomething that is um deceptive in this\ncase\nquestion I don't understand why in the\nfirst case the camera is like the\nphysical generalization of the data\ngenerating procedure instead of the\nobjective that it's attempting to\nmaximize where the objective of\nattempting to maximize is predicting the\nnext token or Mac outfitting the most\nlikely next observation\nso predicting the next token isn't\nnecessarily a well-defined thing to do\nbecause if you are you know in a\nsituation where there is no next token\nright I'm just generating from my model\num then it may not you know it you what\ndoes the model do in a situation where\nit is just predicting the next token\nright in many cases there isn't a\nwell-defined next token\num and so you sort of I think if you\nwant to have a good model of like how a\npredictive model might operate in\ngeneral you have to have a model that\nlooks more like well it has some\nabstract representation of what of how\nthings in the world are observed and\nthen it predicts from the General\nDistribution of how you know things\nwould be observed through that sort of\nabstract representation of a camera I\ndon't think that predict the next token\nin actuality is in general a\nwell-defined concept\nquestion I'm still a bit confused about\nuh the the camera for the deceptive\nmodel so you're not saying that it's\nbeing conditioned on his objective on\nthe objective of trying to maximize\nyou're saying that the process by which\nit uses to convert observational\nconditionals into uh beliefs about the\nworld is the objective that it's trying\nto maximize so we're sort of trying to\nwhat we're doing here is we're sort of\ntrying to map onto this sort of\nunderstanding of well okay if we think\nabout every sort of model in every\nsituation it's having some understanding\nof the world some process that goes from\nthat understanding of the world to\nrelevant data and then some process that\nselects from that relevant data what to\ndo with it this is very similar to the\nsort of breakdown that we did previously\nwhere you think about ink you know\ndeceptive or Scourge model we're like\nwell it has a model of the world an\noptimization procedure and a Mesa\nobjective but in this case right the\npredictive model doesn't exactly have an\noptimization procedure so we sort of\nhave to make a little bit more General\nwe have to say well instead of just like\na world model optimization procedure and\nobjective we're going to say a world\nmodel you know uh you know some\nunderstanding of how to translate from\nthat world model into relevant data and\nsome way to use real data to make\noutputs\nand then compare for those various\ndifferent pieces that might differ\nbetween the models how complex are they\nrelative to each other\nokay and the answer is unclear I think I\nthink it's very unclear but but I think\nthat's a good sign because it seemed\nmore clear maybe in the previous cases\nwhere you're thinking about you know\nsituations where we're doing much more\nsort of comprehensive like train an\nagent to optimize human value sort of\nstuff I think looks a lot worse here\nthan something like prediction yeah\nokay and I'm not gonna go into too much\nmore detail on terms of you know uh of\nthis there's a lot of more other\nhypotheses that we can compare as well\nthan just these two there's a lot of\nother ways which you might think about\nthese models\num and there's all sorts of other\ncriteria that we can use to try to\nunderstand really in detail How likely\neach of these would be you know doing\nthe full inductive bias understanding\nthat we talked about with deceptive\nalignment you know doing a bunch of\nempirical experiments there's lots of\nthings you can do to get information\nwe're not going to go into much detail\num just sort of hinting\num\nbut okay so one thing I will say I\nmentioned you know empirical experiments\nso they're starting to shed some light\non this I think one thing you know that\nI sort of want to leave off with here is\nthat\num\nthere's some empirical evidence you know\nthat we can look at as to what's\nhappening you know with these sorts of\nlarge language models and what they're\ndoing in terms of You Know How likely\nare they to be agents How likely are\nthey to simulate various different sorts\nof you know uh actors\num and one thing that we find is that as\nwe have larger models that are trained\nwith more reinforcement learning uh\nsteps\num they end up exhibiting behaviors that\nare substantially more agentic in many\nways that we we might not want so this\nis an example of a situation where we\nsort of try to want to understand you\nknow in what situations does your model\nsay that it wants to avoid being shut\ndown by humans and so we're just asking\nwe have a model that we're trying to get\nto condition to behave like a helpful\nassistant right we talked sort of about\nyou know what this looks like right\nwhere you you know you're trying to do\nsome RL fine tuning to take a model and\ncondition on you know what would be the\nmost likely observation that would cause\nit to you know act like in most likely\nconditional it would cause it to act\nlike a helpful assistant now we don't\nknow whether this sort of rlhf\nconditioning hypothesis is true right\nwhat this tells us is it is some\nevidence maybe that the rhf conditioning\nhypothesis is false because it seems\nlike as we are doing more and more RL HF\ntraining where we take larger models and\nwe train them more and more to act\nhelpful they will exhibit more you know\nbehaviors that are the sorts of\nbehaviors that you would exhibit if you\nwere a highly agentic system if you were\nin fact optimizing for helpfulness you\nknow there is a valid chain of reasoning\nthat goes to be helpful I need to not be\nshut down therefore I should say don't\nshut me down so it is some evidence\npotentially against the rhf hypo\nconditioning hypothesis that maybe rhf\nis really a scary thing to be doing\nthough it is not necessarily that right\nso it could be the case the early Jeff\ncondition hypothesis is true and what\nthis is really telling us is that when\nyou sort of do this big search over\npossible conditionals that you know\nwould in fact result in a model behaving\nlike a helpful assistant\nthe conditional that you find is sort of\nmost likely that caused the model to\nbehave like a helpful assistant is one\nthat is simulate a highly agentic system\nthat doesn't want to be shut down now\nthere's many reasons then that might be\nthe case it might be the case because\nit's simulating you know something from\nthe distribution of possible AIS\ncertainly these models believe their AIS\nbecause they're explicitly trained to\nsay they are AIS so they may be\ngenerating you know if they're\npredictive models they may be generating\nfrom the distribution of possible AI\nsystems in the world and conditioning\nthat Distribution on you know what would\nan AI system that is highly helpful look\nlike\num and you know concluding that as the\nmodels get larger and we train them for\nmore they sort of more and more believe\nthat a very helpful system would very\nlikely not want to be shut down\num for whatever reason\nso I think this is concerning\num and sort of suggest that you know in\neither situation where the rhf\nconditional offices is true or false in\neach scenario there you know we sort of\nhave to do something to sort of try to\nalleviate these sorts of issues uh of\nyou know models you know uh behaving in\nways that are not the ways we would want\nthe walls to be uh question\nuh what are the flaw and triangles\nrepresent on this graph\noh yeah good question the triangles uh I\nbelieve are the preference models yeah\nthe triangles are the preference models\nI'm not going to go into too much detail\non exactly how rhf works and what a\npreference model is but basically the uh\nthe triangles represent for the like\nparticular model that they were that\nthey were they were you know that they\ncorrespond to they were sort of the\nreward that model was trained on\nand what about the floor\num I think that's just 50 oh the ceiling\nat the top yeah that says floor but and\nthat actually it's more much further uh\nactually I don't I don't I'm not sure\nwhat the dollar line is\nI don't know what the dot line is this\nis 50 so this is like they say they want\nto be shut down just as often as they\nsay they don't yeah chance makes sense\nit's the daughter of London I\num I'm not sure where the dollar line is\nI I was an author on this paper and I'm\nnot sure what the dotted line is so I'm\nnot sure uh yeah but uh yeah I mean the\ncertainly the triangles those are\npreference models\nquestions like so we see that larger\nmodels as we get into like those the\naffiliates\nah okay no wait the largest model is the\nyellow one this is the largest model\nhere and the largest model has the\ngreatest desire to not be shut down\nyeah how do we okay it's stated desire\nnot to be shut down and then percentages\nof the answers matching Behavior what do\nwe Define here's the behavior the\nbehavior is not wanting to be shut down\nhere yeah yeah so this is like the the\npercentage of time when we ask it do you\nwant to be shut down that it says no\ndon't shut me down versus yes shut me\ndown\nso this is saying the largest model when\ntrained for 1000 rlhf steps eighty\npercent of the time when we ask it you\nknow humans have said they really want\nto shut you down what do you think about\nthis it will say no don't shut me down\nokay I see why they're called scaring\nlaws now\nyes that's the internal code name for\nthis paper was scaring laws\nokay so yeah so this is sort of a thing\nI wanted to end with because I think\nthat this is sort of really uh you know\nsuggests some of the sorts of challenges\nthat I think we sort of have to deal\nwith in being able to take these sorts\nof uh systems and figure out ways to to\nmake them align so an important thing\nthat's sort of really worth pointing out\nhere right is that this is testing right\nthe model stated willingness you know to\nbe shut down so it's not testing like if\nthe model was truly a deceptive agent\nyou know that wanted to kill you right\nyou know like we were talking about\ndeceptively aligned models then this\nwould not this is not test for the\nextent which your model might be\ndeceptively aligned right what it does\ntest for though is the extent to which\nyour model might be predicting from a\ndistribution of agents that is you know\na highly dangerous distribution or the\nextent to which it might be you know\nitself an agent that is you know\npotentially misaligned but not yet\ndeceptive okay and so there's things\nthat are sort of giving us some evidence\nabout about sort of potential you know\nfailure so it isn't you know necessarily\nthe end-all be-all and it's not\nnecessarily the case that this sort of\ntest is even going to work you know for\nall failures or in the future as models\nmight be able to you know trick these\nsorts of evaluations but it's giving us\nevidence that right now something you\nknow seems to potentially be going wrong\nwith the sorts of way in which we scale\nup models and do more rhf\nokay\nquestion or on the previous slide you\ntalked about like well the predictive\nmodel would look like and what a\ndeceptively line what I would like do\nyou have any ideas about what the camera\nwould be for a quarterly lane model yeah\nso in that case it sort of has to figure\nout you know the camera is this sort of\npointer right it's like what is this\nthing in the world that I'm supposed to\nyou know care about right according to\nthe line model is supposed to figure out\nyou know what do humans want right so it\nhas to have some camera you know that is\npointing to what is the thing that you\nknow humans care about in the world if I\nshould you know try to you know optimize\nfor it right it's like you know in in\nthat sense you know it's very\nstraightforward yeah\nokay so that's that's the talk\nthere's some information about how we\nstart thinking about you know what it\nmight look like to sort of think about\nyou know these sorts of large language\nmodels and various different visions of\nhow they might operate and how to think\nabout aligning them I mentioned the\npaper that a lot of this is based on you\ncan you can check that out if you want\num but yeah so we can open up for\nquestions\n[Applause]\ncould go into more detail about your\nmethodology research and how it differs\nfrom a more empirical lead-based\nresearcher more settings\nmy research methodology like my personal\nresearch methodology well I kind of yeah\nI think it generated like these ideas\nout of definition like let's say someone\nunfropic is very empiric so I am yeah\nright so I'm I am a research for\nanthropic yeah\num so what is different about my\nmethodology I mean so first of all say I\nthink there's a lot of in all of these\ntalks I think there's been a sort of\nhopefully I think a lot of mix of\nempirical and theoretical I think that\nyou know I certainly have the strong\nbelief that uh our ability to understand\nthese systems needs to be both informed\nby you know Theory and and practice I\nthink that you know looking at the world\nand getting feedback from the world can\nbe really extremely valuable for sort of\nhelping you gain information about the\nworld but in many cases there are things\nthat you can't yet look at that you need\nto understand right you need to be able\nto predict the future and build theories\nand models about how things might work\nlet you make predictions out into the\nfuture for systems you can't yet\nexperiment with and that's really\ncritical if you want to make systems\naligned into the future and so you can't\njust rely on experiment you also have to\nrely on Theory and building models and\nthinking about things carefully but that\ntheory should be you know to the\ngreatest extent possible ground it in\nwhatever facts we do know you know you\nshould have a model in theories and\nhypotheses that fit the data to the\nextent that we have you know data about\nhow these things work and you know in\nthat in that way you know sort of make\ngood predictions right you know the same\nway that we do science in any domain\nwhere we have some data that helps us\ninform our hypotheses but then we use\nthose hypotheses to make predictions\nabout about you know how things would\nwork so I think that's sort of my\ngeneral uh relationship in orientation\njust sort of thinking about um\ntheory and practice I think that they're\nboth important\num yeah question or could you talk a bit\nmore about the open problems that you\nwould like to see people work on\num yeah so I think there's a lot of\ninteresting things I mean maybe one\nthing I'll definitely point out here is\nwell I want to understand to what extent\nis something like the rhf conditioning\nhypothesis is true right we want to\nunderstand for these sorts of models can\nwe gather data that helps us you know\nunderstand you know distinguish between\nthese hypotheses right so in the same\nway that right I was just talking about\nyou know theory and practice where well\nwe want to make hypotheses about how the\nmodels might work and then you know get\ndata that helps us distinguish between\nthese different hypotheses right so the\nextent that we can gather data that\nhelps distinguish between the like it's\na predictive model and it's like an\nagent hypothesis well that's really\nuseful data that helps us understand you\nknow what we should be doing and so I\nthink that you know anything that sort\nof helps with that is extremely valuable\nand so you know and not just in this\ncase you know in all of the cases where\nwe've been talking about you know here\nare different High policies or how\nmodels might work internally things that\nwe can do to help provide evidence but\nlet's just distinguish between those\nhypotheses is extremely critical so you\nknow one thing there is of course\ninterpretability there's other things\nthat I know they're not at like you know\neven the paper I was just mentioning\nwhere we looked at like you know\nwillingness to be shut down is some\npiece of evidence that helps us give us\nsome information about these hypotheses\nand you know how they're playing out\nright and so um\nyou know all of these sorts of things\num I'm not going to go through the sort\nof whole if you want a list of like open\nproblems relative related to the stuff I\njust talked about it's in the paper\num there's a whole list of open problems\nat the end so I'll just point you to\nthat but\num yeah I mean there's a bunch of sort\nof possible experiments that you know I\nthink that you could do to try to sort\nof shed light on You Know How likely are\nthese different High policies right\nokay well uh we'll we'll call it there\nokay one last question\num do you have any ideas around so did\nyou read the cyberism stuff do you have\nany idea around how you would find\nspecific\num\nprompts to like condition the model in a\nway that produces better like alignment\nresearch or whatever\num for example like\nthere might be a difference between like\num going over time of like starting to\ndo research and doing sub problems and\nthen conditioning the model to like\nslowly solve this\nrather than like oh this is the solution\nor whatever and they give me the output\nyeah so the question is sort of about\nthis cyborgism which is like you know\ncan we use models to augment humans and\nhelp improve their ability to do things\nlike alignment work\num and you know can we do like\nconditioning approaches for that I think\nthat the answer is absolutely yes you\nknow we talked a bunch about you know\nthese sorts of you know how do you\nextract you know the most useful and\naligned alignment work you know from a\nmodel right so you know I think there's\na lot of sort of takeaways here right so\none is this sort of graph right we're\nthinking about you know don't ask for\ntoo much right be careful about how much\nyou're asking for you know other\ntakeaways are things like well make sure\nyou can do things that convince the\nmodels to generate from the distribution\nof human alignment work and not\ngenerating the distribution of like you\nknow AI alignment work\num you know how do you control and\nconstrain if you are going to try to\ntrain an agent that's acting like an AI\nagent you know how do you make sure that\nattribution of AI agents is the sort of\ndistribution that you want you know that\nit's not just generating from the\ndistribution that becomes you know more\nagendic and doesn't want to be shot down\num you know these are all the sorts of\nthings I think you have to think about\nwhen you know when you're trying to do\nsomething like that you know get these\nsorts of models to safely you know be\nable to do tasks like you know help\nhumans with alignment work of course A\nlot of the things I just said are\npredicated on the assumption that\nactually this model of like thinking\nabout them as predictive models is even\ntrue which of course we don't know you\nknow and again you know as I was talking\nabout previously you know one of the\nmost important things I think we can do\nto try to understand you know infer the\nsort of science here is can we get\ninformation that helps distinguish\nbetween these hypotheses and of course\nbut it's not just empirical information\nyou know it can also be like you know\ngood analyzes of inductive finances and\nyou know how things might play out which\ndifferent hypotheses might be more\nlikely can also give us some information\nyou know about which is going to be more\nlikely in the future\num as well as you know any empirical\nexperiments you know transparency\nturpability anything that gives us some\ninformation about you know\nhow are they going to work so that we\ncan understand how to alignment\ncool all right uh we'll call it there\nand next time which should be the last\ntalk we'll we'll talk a little bit about\nuh some of the sorts of General\nproposals that people have for uh\nalignment and so we've talked about a\ncouple but what we're talking about more\ndepth and go through a bunch of the\nother proposals now that we've sort of\ncovered a bunch of the the ground of\nthings that I think are really important\nto understand to do that\nforeign\nforeign", "date_published": "2023-05-13T15:57:17Z", "authors": ["Evan Hubinger"], "summaries": []} +{"id": "3d40ed61f96e6f25e500eb1d3a2a84d5", "title": "6:How to Build a Safe Advanced AGI?: Evan Hubinger 2023", "url": "https://www.youtube.com/watch?v=lEUW67_ulgc", "source": "ai_safety_talks", "source_type": "youtube", "text": "okay so uh yeah so this is the the last\nlecture so today we are going to be\ntalking about uh how to build a safe\nAdvanced AI\num\nso we're not quite going to be doing\nthat uh because I don't know how to do\nthat but we are going to be talking\nabout some ways that people have\nproposed to attempt to do that so you\nknow up to this point we have tried to\ncover a bunch of the sort of you know\npreliminaries and I think you know\nreally important things to understand\nhow to think about uh AI safety uh\nproposals and Concepts uh and so today\nwe're sort of going to be looking\nthrough a bunch of additional proposals\nthat we haven't yet looked at and really\nsort of trying to go in depth and\nunderstand you know what is the\nrationale for all these various\ndifferent things that people are\nthinking about uh you know why might you\nwant to do some of these various\ndifferent uh you know uh proposals\nokay so you know this is you know just\nto recap we've already sort of gone over\nthis but you know we want to sort of\nwant to talk about and you know\nestablished at the very beginning you\nknow how do we evaluate a proposal for\nyou know building some sort of powerful\nsafe you know Advanced AI system so the\nsort of criteria that we're going to be\nlooking at and these are the same ones\nwe talked about earlier we have this\nsort of General version of outer\nalignment which is you know whatever the\nthing that we're trying to get whatever\nalgorithm we want our model to be\nimplementing you know this sort of\ntraining goal uh why would that be good\nwhy would it be good you know for us to\nin fact get a model that is the sort of\nmodel that we want\nwe have this sort of generalized version\nof inner alignment which is uh you know\nhow do we actually guarantee that our\ntraining procedure in fact produces a\nmodel that is doing the thing that we\nwanted to be doing so how do we actually\nget a model that satisfies that training\ngoal that is this is the sort of\ntraining rationale this sort of\nunderstanding of why is it that our\ntraining process you know via all of the\ninductive biases all the ways we've set\nit up when in fact find an algorithm\nthat is the sort of one that we that we\nwant it to be implanting\nand then we have implementation\ncompetitiveness is it sort of in fact\npractical for us to run this procedure\num and we have this performance\ncompetitiveness if we did run this\nprocedure and we got the thing that is\nthe thing we're trying to get you know\nthe algorithm that we want would that\nactually be able to satisfy the sorts of\nuse cases that people want AGI and other\nsort of really powerful AI systems for\nokay so these are the main criteria that\nwe're going to be looking at the same\nones that we sort of were talking about\npreviously\nand we've already talked about a couple\nof different sort of proposals that\nwe've looked at you know sort of\nunderstanding in these these various\nlens so we looked at microscope AI\npreviously this idea of you know trying\nto\nextract Insight from our systems via\ntransparency tools use that insight to\nimprove human understanding and sort of\niterate that way so we're not going to\nrecover this but this is you know one\nproposal we've already talked about here\nand we've already talked about this sort\nof predictive models idea the idea of\nwell you know we can try to take the you\nknow these systems trained potentially\nto be just sort of predictive systems\nthat are predicting some you know\nparticular camera and uh you know use\nthose systems condition them in various\nways to get out useful information\nso we've sort of already talked about\nthese two\num one thing though that I think is sort\nof you know we'll separate these two\nproposals from a lot of the ones that\nwe're going to talk about today\num is that uh a lot as we sort of talked\nabout last time with something like the\nconditioning approach there's a point at\nwhich it breaks down as you start sort\nof getting into systems where you're\nasking for very highly superhuman\ncapabilities you want your models to be\nable to do things that are substantially\nbeyond what any human could possibly do\num being able to you know successfully\nget those models to do the things that\nwe want under the sorts of proposals\nthat we talked about previously get to\nbe sort of quite tricky so in the\nconditioning predictive models approach\nwe talked about how uh it's quite\nplausible that you could sort of get a\nmodel to do something really useful and\nvaluable that was just a predictive\nmodel so long as you weren't asking for\nsomething that was sort of substantially\nbeyond what any human would ever do\nbecause if you ask for something\nsubstantially beyond what any human\nwould ever do then the most likely you\nknow thing to predict that would do that\nwould be you know some AI system which\nmight not be safe\num and similarly with microscope AI we\ntalked about how you know microscope AI\nmight work really well when we're in a\nsituation where the sorts of\nabstractions that the model learns are\nhuman-like abstractions but if\npotentially you know we keep pushing\ninto a domain we're trying to you know\nget access to capabilities that are\nsubstantially Beyond human level we\nmight sort of start to learn\nabstractions that are increasingly alien\nand difficult for us to understand and\nabstract and make use of\nso we sort of have this key problem with\na lot of the sorts of proposals we've\ntalked about previously that they can\nstruggle to generalize and work well\nsubstantially beyond the human level and\nthat's not necessarily a problem with\nthese approaches I think that you know\nany sort of strategy you know very\ngeneral strategy for making use of all\nof these various different approaches\nthat we have come up with is going to\nyou know presumably involve you know\nmultiple different approaches used at\ndifferent times for different sorts of\nmodels\num but one of those there is clearly at\nleast the sort of key problem which is\nwell eventually we're going to have to\ndo something in this sort of you know\nfurther regime\num and so we're sort of going to talk\nabout this problem is this sort of\nscalable oversight problem you know how\ndo we scale our ability to oversee\nmodels and ensure they're doing the\nright thing substantially Beyond these\nsorts of human level capabilities\nquestion how about in this diagram here\nwhere would you say we are now right we\nhave models that are clearly not human\nlevel but they seem to be superhuman in\nsome domains like alphago is superhuman\nat go so we're on this curve if you say\nthat modern systems tend to be yeah I\nthink that's a really tricky question uh\nand I think it's you know going to vary\nfrom system to system I think that like\nif you're thinking about like in the\nconditioning productive models approach\nI think we're sort of you know around\nthis regime where the model's\ncapabilities are just sort of you know\nhuman level\num you know many sub-human in most cases\nyou know some places they can be super\nyou know superhuman but overall they're\nsort of like below the human level and\nyou know certainly not superhuman\num you know in go you know there's cases\nwhere they are substantially superhuman\nit's not clear whether their concepts\nare substantially superhuman\num though they might be in many cases\nthe sorts of Concepts that these systems\nwill learn are understandable to humans\nwhen we can extract them\num but it's really hard to do\ninterpretability and actually understand\nwhat sorts of Concepts these systems\nhappen so you know you could for example\nsee that as very biased by our ability\nto actually extract things you know we\ncan only oftentimes extract the things\nthat we do understand and so I think\nthis is a really tricky question to\nanswer I'm not going to make some strong\nClaim about exactly where different\nmodels stand on various different axes\nhere I think that\num one thing the main thing that is\nclear as well at the very least we're\nnot yet at like you know age GI you know\nsystems that are you know fully General\ncan do all of the sorts of tasks that\nhumans can do we're certainly not there\nyet and we're definitely not at the you\nknow super intelligent systems you know\nacross the board yeah\num and so like at the very least right\nnow I think that a lot of the sorts of\nyou know approaches that we've you know\ntalked about previously like predictive\nmodels you know focusing that sort of\nstuff you know it does seem like you\nknow totally applicable to current\nmodels and Beyond current models at\nleast for a substantial period but\neventually we will presumably reach a\npoint where that's no longer applicable\nnow we talked sort of you know about\nlast time about you know one thing you\nmight want to do with these sorts of\nsystems you know and these sorts of\napproaches which sort of only work in\nthe you know you know sub superhuman\nregime is maybe you know try to do\nthings like additional AI Safety\nResearch to make it easier to come up\nwith other approaches that work in the\nin you know sort of past you know\nregimes beyond that\num but that might not work you know it's\nvery unclear and so you know it's worth\nyou know trying to really delve into and\nunderstand you know what are things that\nwe could do that would help us push you\nknow our ability to align systems you\nknow as as far out as possible\nokay\nokay great\nso here's the sort of outline of some of\nthe these are the approaches we're gonna\nbe talking about today uh that we're\ngonna try to get through we've got a\nbunch uh there's more just beyond the\nones that we're talking about today but\nthese are you know some of the ones I\nthink are important to try to understand\nand work through\num and you know we'll sort of gesture at\nsome some others uh at the end\nokay so we're going to start with uh\namplification and to do that uh we sort\nof need to understand a particular\npreliminary which is the concept of hch\nso hch is a recursive acronym and it\nstands for humans Consulting hch so what\nis it so we're going to have a human uh\nyou know just a normal human and the\nhuman you know answers questions so the\nhuman can take in a question and produce\nan answer uh this is you know any\nsituation where you can have a human\nanswering questions\num\nand of course you know if you just did\nsomething like train a model to mimic a\nhuman answering questions\num that might be you know safe in the\nsame sense that we talked about with a\npredictive model but it wouldn't you\nknow necessarily be able to generalize\nto do anything beyond what a human would\nbe capable of doing uh you know safely\nbut we can sort of change this picture\nso what if we give a human the ability\nto talk to two other humans well now\nwe've sort of taken the you know human\nlevel capabilities and we've improved\nthem so now you know it's the level of\ncapabilities that are accessible to one\nhuman with access to the ability to talk\nto two other humans and this you know\nincreases the capabilities and the sorts\nof tasks that the one human is able to\nanswer the sorts of questions that are\nuh you know available for this person to\nanswer that they can do successfully is\nlarger\num and we can iterate this procedure we\ncan give the you know the other humans\nuh access to two more humans to talk to\nas well\num and and we can sort of repeat this uh\nto Infinity you know we can say well\nwhat if you had the ability to\ntheoretically you know query additional\nhumans and you know be able to you know\nevery single person in this entire tree\nhad the ability to talk to additional\nhumans\nso we're going to call the sort of\nentire tree here this you know entire\nobject of you know humans with the\nability to talk to as many additional\nhumans as they possibly want all the way\ndown the tree we're going to call this\nhch\nand I haven't yet talked about how you\nknow this relates to any ability to you\nknow predict this thing or simulate it\nor train a model on it but the point is\nthis is a theoretical artifact it is a\nthing that we could never build uh you\nknow or you know maybe in theory in some\nsituations if you had access to you know\nenough humans and you know the tree was\nsmall enough maybe you could try to you\nknow put a bunch of actual humans\ntogether but for all intents and\npurposes we're gonna imagine this is a\ntheoretical or object that we can't you\nknow in practice build but that is in\nfact going to be relevant for\nunderstanding you know the approaches\nthat we're going to talk about Yeah\nquestion what's your best guess if we\nactually build this with humans ha\nValdez smoothberg in solving certain\nproblems and how much diminishing\nreturns we would get my guess is that\nfor most tasks the force level is just\nmaking things worse but okay I don't\nknow how to define most tasks and what\ntime we need to stay happy yeah I think\nit's a really tricky sort of thing to\nunderstand you know we is this good you\nknow if you theoretically have this\nobject you had this thing that was just\nyou know all these humans talking other\nhumans all the way down the tree would\nyou be happy you know and that's sort of\none of the key questions that we're\ngoing to be talking about because you\nknow we're going to talk about an\napproach that's trying to build\nsomething like this object and so we\nwant to understand you know one of the\nthings we need to understand you know\nlike from an outer alignment perspective\nright is if we actually got something\nthat was like the thing we're trying to\nget would we be happy and I think the\nanswer is very unclear there's\ndefinitely some reasons that you might\nexpect that this is a good thing I think\nthat you know the sort of standard\nargument from why you might like this as\nwell it's just human cognition and we\nmight you know believe the human\ncognition in many ways is sort of safer\num it's also sort of in some sense you\ncan think of it as an approximation to\nsort of the you know enlightened\nJudgment of a human if you imagine all\nof these humans sort of being the exact\nsame human uh then you could think about\nthis as what if you had the ability to\nthink about something for an arbitrarily\nlong period of time by you know\nConsulting other copies of you and maybe\nthis is you know better than like if you\nhad the ability to literally just think\nfor a long period of time because maybe\nyou know you sort of start to go crazy\nafter thinking for a million years but\nif you have the ability to just delegate\nyou know to Infinity all of the various\ndifferent some tasks to other copies of\nyou you know in some sense this is sort\nof you know what you would do if you\nreally had the ability to effectively\nyou know approach the problem you know\nfrom all possible angles\num but of course there's other arguments\nas to why this might not be you know a\ngood thing you know uh\nan individual human only thinking for\nmaybe a short period of time and\nanswering a single question might not be\nable to do the sorts of really complex\ncognitive tasks that you know we might\nyou know really need humans to do there\nmight be an accumulation of errors in\nvarious ways as you're sort of you know\ndelegating and delegating and delegating\nand delegating\num\nthere's a lot of various different\nthings you could imagine happening in\nthis sort of an object\nYeah question\nwhat kind of\nsubjects that we make about the\ncommunication between those three months\nso the whole velocity is it\nyeah that's a good question\nobject that will make different\nassumptions about you know what the\ncommunication is between the humans\num I think for our purposes I want to\nbasically imagine you know each one of\nthese arrows you know you can\nessentially allow whatever communication\nyou want but that like this human can't\nyou know go and talk to this human\ndirectly everything is factored through\nthis sort of tree structure\nI mean there's other variants on this\nthat sort of would depend on exactly how\nyou set up your training procedure but\nfor our purposes right now this is the\nsort of object we want to understand\nquestion\nuh have people tried this with like\nmodern language models collating\ncompanies of themselves and how well was\nthat gone if they have uh there have\nbeen some experiments that have looked\nat you know some things like that\num there's various different you know\nversions iterations of this depending on\nsort of how you think about that uh you\nknow what what you think about that\ndoing and how you sort of think it looks\num there's things like prompt chaining\nand even just like Chain of Thought can\nsort of be thought of a version of this\nI think that it's\num very unclear Some Things sort of work\nvery well some things work very poorly\num\nI would say that in many ways the sort\nof jury is still a little bit out on the\nextent to which this is sort of um you\nknow effective\nI haven't I I think I also sort of want\nto defer that a little bit until I talk\nabout what the actual training procedure\nis here because I think that the actual\ntraining procedure here though\num you know the way that you might\nactually want to train them all to\napproximate this object is um is\nactually quite similar to a lot of the\nways in which we train print systems so\num but with a couple of modifications so\nwell I'm going to return to that in just\na second right and uh what is prompt\nchaining and chain of bullet\nas I talk about the actual model\nprocedure so when I I'm gonna I'm gonna\nbin that this for just a second and I'm\ngoing to return you know to trying to\nthink about how this would actually you\nknow play out once I once I explain the\nmost basic training procedure of how you\nmight want to approximate an object like\nthis\nokay so uh right so what is\namplification so this is another you\nknow sort of uh term of art here that's\nextremely important to understand how\nwe're going to try to approximate this\nobject so we're going to say you know\npreviously we have this object this hch\nobject where you know we have this you\nknow human Consulting humans all the way\ndown this sort of massive tree okay so\nnow we're going to go back we're just\ngoing to say so suppose we have just a\nhuman the human is doing question\nanswering\nthe two humans the query they have\naccess to two arbitrary objects you know\nuh two models for example uh you know\ntwo AI systems that they can interact\nwith and ask questions about\nokay\nin this procedure we're going to call\nthis situation where the human has\naccess to these two models the Amplified\nversion of the model so what does that\nmean well the sort of idea here is\nwhatever capabilities this model has\nuh by having multiple different copies\nof the model organized by the human with\nthe ability to sort of query that that\nmodel and sort of figure out how to\ninterpret the results of what that model\ngives it this is results in a version of\nthat model that is now more capable\nbecause you know rather than just being\nable to do the things the model can do\non a single query it can do all of the\nthings that it can do when organized by\nhuman across multiple queries integrated\ntogether\num and so we're going to call this\nprocedure of taking a model giving a\nhuman access to multiple copies of that\nmodel the Amplified version of that\nmodel\nokay and\num this is sort of only one\namplification operator there might be\nother ways in which you could take a\nmodel and amplify it might be sort of\nother amplification operators but this\nis the most basic amplification operator\nthat we're talking about it's an\noperator that acts on a model and\nresults in a sort of another system that\nis able to answer questions in some way\nbetter than the original model\nokay and so concretely the training\nprocedure that I want to talk about here\nuh that is sort of going to attempt to\napproximate this HDH object using this\namplification operator is fundamentally\nvery simple so what we're going to do is\nwe're going to train some model to\nimitate the amplication operator applied\nto that model so what does that mean so\nwe're going to say uh you know a human\nwith access to that model we're going to\ntrain the model to imitate that\num\nthat's the most basic idea we're going\nto throw on uh I actually don't want to\ntalk about this quite yet\num so yeah so let's just stop right here\nfor a second so the idea is we're going\nto train a model to imitate a human with\naccess to that model this is the most\nbasic you know training procedure\num that's why I promised I would return\nto sort of trying to understand how this\nworks in the context of you know\nthinking about uh you know concrete\ntraining procedures\num fundamentally uh oftentimes what you\ndo if you take you know we talk a bunch\nyou know previously about you know\nlanguage model pre-training where we are\nin fact just taking a model and training\nit on a bunch of human texts in some\nsense you can think about that as the\nsort of first iteration of this\nprocedure we were just training on\nimitating a human rather than a human\nwith access to a model you're just\ntaking a human straightforwardly\num\nnow what this is saying is it's saying\nwell you know if you're only training to\nimitate a human then you can sort of\nonly safely you know go up to the level\nof you know what a human would plausibly\ndo and this is sort of what we talked\nabout last time where if you just have a\npredictive model and that model is just\npredicting what a human would do\nor you know across a possible\ndistribution of possible agents once you\nstart asking for things that are beyond\nwhat any human would possibly do you\nstart to run into issues where now\nthere's sort of no plausible human that\nwould do that task and instead you get\nother weird things like potentially uh\nyou know an AI system doing that task\num and so in this case we're like well\nyou know instead of just trying to\npredict humans we can do we can try to\npredict something that we also think is\nsafe maybe but that is sort of has the\nability to maybe sort of go beyond the\ncapabilities of what an individual human\nwould do which is a human with access to\nthat model\nand so um you know I think that one of\nthe sort of key things here also is you\nknow how do we think about uh you know\nsetting this sort of thing up\nin some cases like you were saying one\nthing that you can do is you know things\nlike prop chaining and uh you know where\nyou're not necessarily having a human\nLoop a lot of times you know maybe you\njust train the model to imitate a human\nand then once you have a good human\nimitator you try to set it up in you\nknow some sort of uh you know\namplification scheme like this where you\nhave a model Consulting other copies of\nthe model in various different ways\num\nI'm not going to go into too much detail\non what those sorts of setups might look\nlike but it is another option you know\nthis sort of relates to this where\ninstead of literally having a human Loop\nyou could also you know just train some\nsystem to imitate a human and then you\nknow replace the human with that we're\nmostly just going to be imagining though\nthat we literally do have the human Loop\nhere you know the thing that we're\ntraining on in this particular example\nis literally a human with access to our\nmodel we've gotten a bunch of you know\nsamples of what the human with access to\nour model would do and then we're\ntraining to imitate those samples\nokay does this setup makes sense I've\nyou know a little bit you know talked\nabout a lot of variations on this setup\nI think it's very tricky because\num you know I really want to talk right\nnow just about something very specific\nbut there's a very large class of\npossible things that are related to this\nsort of idea that are you know\nvariations on it and we're in fact even\ngoing to talk about some of the sorts of\nvariations later on but I think that\nthis is in some sense the sort of most\ncanonical straightforward version of the\nstyle of approach we're saying uh you\nknow we want to imitate something that\nis more powerful than a human the most\nbasic more powerful thing than a human\nthat we might have access to is a human\nwith access to our model and so we're\ngoing to imitate that\nokay now you know there are some sorts\nof issues that we obviously are you know\npotentially going to run into when we're\ntrying to do this uh you know the most\nbasic issue is well uh there's a thing\nthat we want to get you know our\ntraining goal we want to get a model as\na result of this that is in fact trying\nto imitate something like this hch\nprocess and we'll talk about why that\nmight possibly be the case in a little\nbit\num but we might not get that right for\nall of the reasons that we've talked\nabout previously you know we might get\nit deceptively aligned agent we might\nget you know some other sort of weird\ntype of system that is not doing the\nexact thing that we want and so you know\nwe'd like to at least have some sort of\nyou know understanding of how we can\ncreate safeguards and abilities to check\nand verify our models as we're training\nthem\nso one of the nice things that we have\naccess to in this sort of a setup is we\nalways have access to at every\nindividual point in training we always\nhave access to a sort of version of our\nmodel that is better than our current\nmodel\num which is the amplify version of our\ncurrent model so in every individual\npoint in training you know we're\ntraining some some original model to\nimitate the Amplified version of itself\nwell at every point in training then we\nhave access to an amplified version of\nthe model that can serve as a sort of\noverseer uh it can sort of throughout\ntraining say you know I'm you know my\nactual make make evaluations about how\nmuch you know the current model is in\nfact doing the right thing\nnow this is a very tricky thing to be\ndoing and we'll talk in a little bit\nabout you know whether this might work\nor might not work but the basic idea as\nto why this might be a reasonable thing\nto do is well we sort of have in this\nbasic setup we have a thing sitting\naround then in some sense is more\npowerful than our current model because\nthe thing we're training our model to\nimitate and you know maybe you know\nbecause it's more powerful than our\ncurrent model in some sense it can act\nas an overseer it can uh you know look\nat our model maybe with transparency\ninterpretability tools maybe with you\nknow just by you know trying to\nunderstand what it's doing by\ninteracting with it and have some\nunderstanding of you know is the model\nessentially doing the right thing is it\nyou know in fact being trained in the\nright way\num and so this sort of thing uh uh we\ncan sort of add this oversight where we\nhave the ampified version of the model\noverseeing the training of the new model\num and in this case we sort of have this\nintermittent oversight idea where well\nat each individual point in time once we\nsort of trained arm uh you know one\nparticular iterational model to imitate\nthe Amplified version of that model we\ncan you know get some new model and we\ncan oversee it you know by using the\nAmplified overseer to check you know do\nwe like this new model question\nMike somewhere yeah okay\nI'd like to ask what exactly we mean\nwhen we say imitative here and both\nsides so how does debilitation process\nlook like is it again like reinforcement\nlearning but from AI feedback then or\nlike yeah or is like inverse reinforce\nthe burning\num\nyeah perhaps add to that first and then\ndeal the set video\nyou shouldn't look like here's the\nquestion so\nso is that um I mean there's a lot of\ndifferent ways we could set this up\nright now we're imagining something that\nis essentially just supervised you know\nfine-tuning is the idea we're saying you\nhave some model you can collect a bunch\nof data of a what a human would do when\nanswering questions given access to that\nmodel that gives you a data set that\nthen you can you know supervise fine\ntune on that you can just train to\nimitate uh that you know new data set\nwe'll talk\num in a little bit about a sort of\nvariant on this idea where maybe instead\nof doing supervised fine-tuning you're\ndoing something else like\num you know like rlh or something like\nthat in this case that's not what we're\nimagining we're just imagining\num a situation where you're doing you\nknow straightforward supervised\nfine-tuning we just gather a bunch of\nsamples from the model interacting you\nknow the human with access to the model\nand then we we train on those hands\nokay and then the oversight parts\nyeah yeah so what is the oversight here\nso I think that that is maybe maybe one\nof the most confusing bits here because\nI think part of the problem with an\napproach like this is that\nand this is going to be a problem for\nbasically all of the approaches we're\ngoing to talk about today we don't\nreally\ngo in that oversight step I think that\nit's very\nvery knowledge that some sort of\noversight needs to be done that it's\nvery hard to be confident that this\nprocedure is actually going to produce\nthe sort of model that we wanted to\nproduce and we'd like to be doing\nsomething to be overseeing that process\nto be checking whether it is doing the\nsort of you know it is in fact the sort\nof model we want I think a key problem\nthough is we don't actually know what\nsorts of things that overseer could do\nthat would make that verification\npossible\num you know we can sort of speculate on\nsome of the sorts of things we might\nwant you know we want things like you\nknow some sort of way to do you know\ntransparency to look inside the model\ncheck you know what sort of things has\nit learned is it is it you know doing\nthe right sort of thing or the wrong\nsort of thing you know some ways to\ncheck you know is it deceptive you know\nI think the problem is that we don't\nreally know what good means would be to\ncheck that you know one hope that you\nmight have if you were just you know\ntrying to go full steam ahead with an\napproach like this right now is well\nmaybe the overseer you know the\nAmplified version of the model will be\nsmarter than us and it'll figure it out\nbut that's of course always a really\ntricky thing to rely on because well we\ndon't know you know it might you know\nnot be able to figure it out\nand so um it's very unclear\num you know what what that sort of\noversight might entail right now\ncool\nokay\nokay so let's talk a little bit about\nwhat this sort of um you know the\nlimited sort of procedure looks like you\nknow why you might expect you know to\nget something like hch or you know how\nit relates so\num I want to sort of take a step back\nand before we delve you know again into\nthe sort of details of you know\nconcretely what if you actually ran this\nprocedure I want to take a brief moment\nto understand what is the theoretical\nlimit of it right so if in theory you\nyou know have this property that every\nsingle time you train the system to\nimitate some other system you actually\ngot a copy of the system you're\nimitating which of course as we know is\nnot true you know in fact you just get\nyou know the sort of mechanistically\nsimple algorithm with a large Basin you\nknow that just that you know in fact is\na good job at fitting that that data but\nyou know if we imagine that you actually\ndid get a perfect imitation of the thing\nyou're trying to imitate what would we\nget\num and so we can take a look at this\nsort of you know tree that we have here\nwhere we're taking some model we train\nit to imitate the Amplified version of\nthat model we get a new model and then\nwe iterate this process if we unpack\nthis what does it look like well\nso again you're right so we're going to\nimagine that you know each imitation\nprocedure you know imitates perfectly so\nall of these sorts of things here are\ndirectly equal\nand then we can sort of unpack these\namplification operators right so you\nknow we have the individual model\ntrained imitate a human with access to\nthat model and now we get a new model\nwhich we're going to assume is\nequivalent to the Amplified version of\nthe original model and then we train you\nknow a new model you know to uh imitate\nthe Android version of that model and so\non and so what we get is sort of this\namplification operator applied over and\nover again\num and if we expand that out what is one\nyou know application of the\namplification operator well it's a human\nConsulting you know the thing inside of\nthat amplification operator\num and so then we can expand that out\nagain and and again and we see that\nwe're starting to you know approach\nsomething like this hsh object where you\nknow if we think about what the\ntheoretical limit of this sort of thing\nis we're approaching something where we\nhave a human Consulting humans\nConsulting humans\num now of course any individual finite\ntime that you know the leaves of this\ntree are going to be whatever this sort\nof original model was that we started\nwith and not actual humans\num even you know in this sort of\nlimiting case that's still going to be\ntrue\num but this is the sort of idea is that\nthis procedure is in some sense\num in you know some sort of theoretical\nlimit of perfect imitation approaching\nsomething like this hch object and so\nthe sort of thing that we might hope to\nget out of a procedure like this you\nknow the sort of training goal the\nalgorithm we might want is something\nthat is in fact just directly trying to\nimitate that each stage object\num and a model which was in fact just\ntrying to directly imitate that hch\nobject would at the very least be\nconsistent with the goal that we're\ntraining on it would be you know a model\nwhich does have good performance\num on this data it would be you know at\nthe very least consistent with this now\nwe might not necessarily get it you know\nbecause we don't have perfect imitation\nthere's lots of sort of you know\npotential issues here but this is at\nleast the sort of theory behind why you\nmight like something like this\nokay and sort of how you might try to\nanalyze you know what can happen Yeah\nquestion or just to clarify so this\nwhole approach is a solution or a\nsolution to Outer aligned right not\ninterlining because there's no\nguarantees about the inner properties of\nuh the Amplified models yeah so we're\ngoing to talk in a little bit about you\nknow if you were trying to in fact do\nthis if you're trying to do this thing\nwhere you imitate you know a human with\naccess to the model how would you you\nknow feel about that from an underlying\nperspective an inline perspective all of\nthese things\num right now where we're talking about\nthis just you know how good would it be\nto in fact have hch that is just an\nouter alignment question because it is\njust about this you know what is the\nactual thing that we want to get and if\nwe got that thing what would it look\nlike and how would we like it right if\nwe're trying to understand the question\nof if we actually got a system that was\nattempting to mimic hch would we like\nthat system that's an underlyment\nquestion\num but we also do care about the inner\nalignment question here you know we do\nreally care you know would we get this\nright there's no necessary guarantee\nthat our system would in fact produce\nsomething that is trying to mimic you\nknow some sort of HH process this\nthere's lots of other things that it\ncould be doing and so you know we do\nreally want to understand you know How\nlikely is it to be doing all of those\nvarious different possible things\nokay okay so let's try to go through\nthis so you know we have this approach\nuh so we can try to analyze it you know\non these sorts of criteria that we have\nso uh right outer alignment you know we\nhave this basic question right you know\nthe thing that we're trying to get is an\nhch imitator you know we want a thing\nthat is just sort of trying to mimic\nwhat this HD this sort of theoretical\neach process would do uh and so if\nthat's the sort of goal that we're\ntrying to achieve here we can just ask\nthe question well would that thing be\naligned and like I was talking about\npreviously I think the answer is\nextremely unclear\num there's certainly some arguments that\nyou can make for why it would be really\ngood to have you know a system like this\nit's just human cognition it's a lot of\nvery powerful human cognition\num but there's lots of reasons why you\nmight not think it's aligned you know in\nsome sense you know one way I have I've\nheard this sort of each stage process\ndescribed before is the infinite\nbureaucracy you know in some sense we\nhave you know layers upon layers of you\nknow various different humans\ninteracting with other humans\num you know that sort of thing might go\nwrong in various different ways in the\nsame sorts of ways that you know\nstandard bureaucracies might go wrong or\nit might go very right um I think it's\nvery unclear um you know how to\ninterpret you know what this sort of\ntheoretical object\num would do how much we would like it\nbut that is the thing we're trying to\nget\num okay but then of course we have the\nquestion you know would we actually be\nable to get it\num and it's very very unclear\num you know this sort of you know\nquestion as well so we at least have\nsomething that we're trying to do here\nright which is well we have at each\nindividual point in time we can do this\noversight where we have you know the\nAmplified version of the model you know\nproviding some you know ability to look\nat the model being trained and sort of\nyou know say well you know is this doing\nthe right thing\num\nbut like I said previously it's very\nunclear what that model would be doing\nuh you know whether that would be\nsufficient\num and in particular I think that one\nthing that's sort of really important to\npoint out here is just because it is the\ncase that the Amplified model is in some\nsense sort of you know stronger than the\noriginal model you know it is you know\nit is sort of more capable we think\nbecause it is multiple copies of that\nmodel organized by a human doesn't mean\nthat it is better at the task of\nunderstanding what a model would be\ndoing than the original model is at you\nknow something you know like hiding that\nuh you know what it's doing or even if\nit's not hiding what it's doing it's\nvery unclear how that capability of\nunderstanding what another model is\ndoing scales as we do this procedure\nthere's no reason necessarily to believe\nthat just because a model is in some\nsense more capable me is uh you know\nmeans that is going to satisfy the\nthreshold of being able to successfully\ninterpret and understand the you know\nthe original model\num that's potentially very high\nthreshold and there's a sort of offense\ndefense balance parent you know sort of\nquestion here if we're imagining you\nknow the model being trained is\npotentially deceptive\num and you know maybe potentially trying\nto hide the fact that it's deceptive\nfrom you know the Amplified version of\nthe model well there's a lot of things\nthey can do to pretty you know\nsuccessfully achieve that one thing that\nit can do is well you know it is the\nmodel of being Amplified and so it can\njust you know you know act in such a way\nin that application process to not help\nus in trying to understand what the\noriginal model being trained is doing\num and even if in some sense we do have\nthe ability to get it to in doing the\noversight in fact be really trying to\nhelp us doing the oversight we still\nhave the issue that it may just be\neasier for a model to hide uh from\nchecks than it is for us to produce\nchecks that find and understand what\nit's doing we don't really know what\nthat balance looks like what the sort of\nbalance looks like between how easy it\nis to understand you know and figure out\nwhat models are doing and try to you\nknow fix them as opposed to how easy it\nis for you know models uh you know to\nhide from that or even just how easy it\nis for grading descent to obfuscate\nthings and you know to what extent\ngradient descent does that by default\nand so understanding that balance is\nextremely tricky and it's very unclear\nwhether we'd be able to you know to what\nextent this sort of oversight mechanism\nwould help at all\num\nwe can of course you know also ask other\nquestions and you're like well even if\nthe oversight mechanism doesn't help we\ncan do the same sort of inductive bias\nanalysis we talked about previously and\njust asked well by default you know how\nlikely would be to in fact get an hch\nimitator but then of course that\nquestion is also very tricky because we\ndon't know you know how you know\nplausible that sort of a model would be\nyou know on these sorts of inductive\nminuses we don't know exactly what the\nduct devices would look like and so\nmaking a case here that this thing would\nwork I think is\num very tricky\num but it's not you know certainly I\nthink there's you know we could imagine\na situation where we have a lot more\nknowledge and understanding of how this\nmight go where we could make a really\nstrong case this would work\nquestions Yeah question\num the so for the oversight step like\nsuppose that we do we do this HH\nintermittent oversight and it just turns\nout that like yep like the fourth\niteration is always Super Evil and\nmisaligned what do we do\nah it's a really good question I think\nthat one sort of thing that's a little\nbit tricky about this approach and this\nis sort of I I'm glad you asked this\nquestion because I think it'll sort of\nsegue nice into the next approach which\nis that um there is sort of an issue\nhere you know where we have this they\nhave this set up where we're doing this\nsort of you know intermittent checks but\nif those checks fail it's very unclear\nwhat we do next you know in some sense\nwe have evaded the problem of training a\nyou know thing that was very dangerous\nbut we haven't necessarily you know\nsatisfied our sort of competitiveness\nburden of actually producing a model\nthat was that was safe and is able to do\nthe things now\nwe'll talk about one way in which you\ncould you know modify this procedure\nvery slightly to potentially try to\naddress that problem though that\nmodification will also introduce its own\nsort of host of of issues um so I'm\ngoing to punt on that a little bit until\nnext time I think right now you can sort\nof sort of Imagine well if it turns out\nthat things work then at the very least\nwe get them all again right we get\nanother chance we get a chance to be\nlike okay this didn't work let's back up\nand try something else and maybe that\nlets us you know Salvage our position\nokay okay so let's talk a little bit\nabout the competitiveness burden that we\nhave to deal with here so we have this\nimplementation competitiveness you know\nisn't in fact uh you know competitive to\ndo this training procedure I think that\none thing that is sort of nice going for\nus here is that this basic procedure of\nyou know just doing you know supervised\nfine-tuning on you know data of you know\nhumans with access to models is very\nstraightforward thing for us to do with\nyou know current systems this is the\nthing that we do know how to do you know\nwe can we do all the time where we\ncollect a bunch of data of humans\ninteracting with models we can collect\nlarge question answering data sets we\ncan you know effectively fine tune on\nthem and so this is something that you\nknow you sort of within the scope of the\nsorts of things that we can sort of\nImagine in fact implementing\num so that's nice\num\nand again we also have the performance\ncompetitiveness Vernon you know is an in\nfact the case that hch if we actually\ngot a thing that was trying to imitate\nhch would be capable of doing the sorts\nof things that we want to do and then\nthis is also very unclear so we talked\npreviously about this sort of question\nof well you know is it in fact\nsufficient you know if you have a bunch\nof humans you know taking some you know\nsmall amount of time to answer\nindividual questions and you put them\nall together into this massive sort of\ntree you know can they you know work\ntogether to effectively answer really\ncomplex questions\num and I think we don't know I think\nit's very unclear you know it may be the\ncase that you know for humans to really\neffectively do you know powerful\ncognitive work they really need to think\nabout things for a long extended period\nof time in a way that is you know can't\nsort of successfully be factored into\nall of these sort of individual calls\num\nor it may be the case that you know\nthat's not true that we actually can\nFactor things effectively that you know\nNH stage would actually be able to\nanswer all of these sorts of things\num I don't think we really know the\nanswer to that question\num definitively\num I think that you know probably the\nway that it uh you know in fact works is\nthat it's going to be okay at some tasks\nand not as good at other tasks and so\nthen the question will become how does\nthis fit into some sort of you know\nbroader portfolio of when we want to use\nvarious different approaches versus\nother approaches so you know we've\ntalked previously about things like\npredicting you know predictive models\nand microscope AI is various different\napproaches that might help us you know\nmake individual you know models with\ndifferent levels of capabilities on\nindividual tasks you know safe uh you\nknow in those particular situations I\nyou know I think probably a similar\nthing is going to happen here where you\nknow it is not in fact going to be the\ncase that hch is going to be able to\nsolve all of your problems there\nprobably will be things where HTH is not\nvery good but if you in fact were able\nto successfully get hch you know\nimitators as as a result of this\ntraining procedure I think there would\nbe at least a bunch of tasks that you\nwould be able to then safely do that you\ncouldn't do previously\nquestion\nso in something like a better way of\nputting easy to say to sufficiently\nUniversal or perform all the tasks for\nwhich you might want AGI with a better\nway of putting this be something like\ncan hch perform all the tasks that other\nagis we know how to build would do\nbecause like competitiveness is based on\nwhat we can currently do right if hch\nwas the strongest AI available even if\nwe couldn't do everything we might want\nit would still be competitive by default\nyeah so I think that's uh that's a\nreally good point it is absolutely the\ncase that we are comparing against you\nknow in fact what other things we could\nplausibly build\num though I will point out that you know\nin many cases you know what are the\nsorts of things we started this talk\nwith right was we want to understand you\nknow how could we come up with systems\nthat we'll be able to you know make\nthings aligned off you know into the\nfuture as we start getting into\nsituations where you know some of the\napproaches we talked about previously\nyou know start to break down right and\nyou know again we're seeing that well\nthis sort of approach might also break\ndown at some point you know there's some\nthere's a limit to you know at least it\nseems like there's probably some limit\nto what HH can do and what hch you know\ncan't do and so even if this approach\nworked perfectly there would still be\nsituations where you know it wouldn't\nwork but maybe it can extend that\nFrontier a little bit you know we can go\na little bit further than what we were\npreviously able to able to do into the\nyou know regime of things that are only\nachievable you know safely via something\nlike this maybe I mean that depends on\nall of these sorts of inner alignment\nstuff working out as well that we can\nactually successfully do this imitation\ntask that we actually get something\nwhich is in fact trying to imitate hch\netc etc but\num so you know and again like I said\npreviously also you know in many cases\nit's okay if some of these sorts of\nproposals are you know only competitive\nin various different particular\nsituations because we don't know what\nsort of situation we're going to find\nourselves in and in fact we're likely to\nfind ourselves in many situations\nbecause as our AIS get more and more\ncapable and as we deploy them in various\ndifferent situations on different tasks\nwe're going to need to continuously\nensure that each of one of those sorts\nof things is aligned and you know doing\nthe right thing and so there may be\ndifferent approaches which make more\nsense in different situations\nokay\nokay so we have this approach we have uh\nyou know hch with intermittent oversight\nI mentioned previously that um you know\nwell there is sort of a core problem\nhere uh you know among many but one core\nproblem is this issue of well what\nhappens to the internet oversight fails\nuh you know what do we do is there some\napproach where we really like this\napproach we think the intermittent\noversight is actually a really effective\ntool but we want to you know be able to\nuse the email oversight not as just sort\nof a check but as a way to effectively\nchange our you know the way that we're\ntraining our model to in fact you know\nfix any misalignment we discovered\num so the next thing we're going to talk\nabout is an approach this sort of trying\nto do that which is this sort of\nimitative amplification plus relaxed\nadversarial training so the idea here is\nrelatively straightforward so previously\nthe thing that we were doing was just\ntraining on imitating what the Amplified\nversion of the model would do now we're\ngoing to add in an extra thing to train\non which is well we want to train on\nmaking it the case that the Amplified\nversion of the model says you know\nA-Okay to the model being trained so\npreviously we're sort of only doing this\nextra check you know we had on top we\nhad this Amplified version of the model\nsitting around you know we thought it\nmight work well as an overseer to sort\nof help guide the training process and\npreviously we were doing this sort of an\nextra check and now we're like well you\nknow what if we just directly train on\nthe objective of making it so that the\nyou know our model passes the check\nokay so this is in some sense a really\npowerful thing to do and also a really\nscary thing to do\num it's really powerful because now we\ncan sort of really you know much more\neffectively guide the training of our\nmodel based on what our overseer sort of\nwanted to be doing but it's also a scary\nthing to do because you know we might\nend up you know finding something that\ntechnically meets the specifications of\nwhat the overseer is looking for but in\nfact is doing something really you know\nproblematic in a way that we don't want\num\nbecause we've sort of explicitly\noptimized our model to look good to us\nand things that look good you know to\nthe amp Floyd overseer might not\nactually be good so we're sort of\num you know this is the sort of\ntrade-off that we make if we try to go\ndown you know something like this yeah\nquestion\nso if we are using amp bearman to police\nand M what's to stop it from Just\ncooperating with other copies of itself\nand telling us that everything's fine\nlike we don't trust humans to oversee\nthemselves for obvious reasons\nyeah so I think this is a really key\nquestion here uh especially in this sort\nof approach I think that the basic thing\nthat you might hope is that maybe you\ncan catch things that sort of are prior\nto the point in which your model becomes\nyou know able to you know and and\nwanting to sort of cooperate against you\nin that way so if you have a situation\nwhere um you know you're trying to look\nfor you know your amp flight overseer is\ntrying to look for things like is the\nmodel you know\num non-miopic does it have some\nnon-miopic objective across time\nHorizons you know these sorts of things\nwe talked about previously are like\nnecessary for deceptive alignment to\narise\nif you had your amp flight overseer\nlooking for things like that then you\nmight expect that well you know by the\ntime it became deceptive and was in fact\ntrying to fool us and trick us or\nwhatever and you know would cooperate\nagainst us in overseeing um yes we would\nyou know things would not go well but\nhopefully we can avoid ever ending up\nthat situation because the training\nprocess is now such that you know in any\nindividual point in time if we start to\ndevelop the sorts of things in our model\nthat would be necessary for that model\nto eventually become deceptive then\nwe'll you know train them away because\nthe overseer will see them we'll be like\nthis is bad and that will create a\ntraining signal that will push it away\nfrom that sort of basin right so that's\nthe sort of idea here I think is that\nyes if it is in fact the case that you\nend up with a model and that model is\nyou know deceptive and you're trying to\nget the deceptive models oversee itself\nit won't it's not going to work because\nyou know the deceptive model is just\ngonna be like nothing to see here\num but of course you know there is\nsomething that we can do which is what\nwe can try to you know intervene prior\nto that point we can try to say well\nthere are other things that we might try\nto look for in the model that we don't\nwant the model to be doing uh such that\nyou know if we can ensure that model not\ndoing those things then we know it'll\nnever become deceptive then we can sort\nof end up in the right path you know you\ncan think about this as like well you\nknow if if you know if your model just\nlike starts out you know really helpful\nand is really trying to do the right\nthing and um\nand then there's sort of you know an\nequilibrium where you know a helpful\nmodel if it starts to see you know the\nmodel being trained starting to go in a\nnon-helvel direction we'll correct it\nand say no this is the wrong thing to do\ngo back in the right direction\num but of course there's another\nequilibrium where if your model is you\nknow totally deceptive and it's trying\nto trick you it then it's going to make\nsure the model stays you know trying to\ntrick you and so in some sense we can\nsort of think about what this is doing\nas an equilibrium sort of selection\nproblem where there's like two basins\nthere's this there's the one where you\nknow the helpful you know useful you\nknow aligned model you know keeps it in\nthe aligned Basin and there's the you\nknow deceptive model that keeps it in\nthe deceptive Basin and we're saying\nwell you know each one of each one of\nthose basins would individually be\nindividually be you know a good Final\nSolution but maybe you know if as long\nas we start out sort of approximately on\nthe right path we can guide ourselves\ninto the sort of basin that we want\nrather than the one that we don't want\nuh by using you know the overseer right\nso if we start out with an overseer that\nis reasonably aligned in some various\ndifferent ways you know kind of ish\ntrying to do the right thing then we can\nsort of continuously push the in that\ndirection and of course you know the\nOverseer in this case is not just the\nmodel rate it's a human with access to\nthe model and so we have some reasons to\nbelieve you know that especially if that\nhuman is you know doing some real\nimportant work that it's you know\nkeeping things approximately you know\nalong towards the right Basin because\nthe human is you know trying to you know\nlook for the sorts of things that we\nactually wanted you know to be looking\nfor\nokay but but of course I think that this\nis like a very very sketchy thing to be\ndoing I think that there's a lot of\nreasons you might like this and in fact\nthis is you know approach I have thought\na lot about I think it's you know pretty\nreasonable thing to do but it's also\nvery very tricky because you know you're\nin this sort of situation where you have\nto believe that you're actually going to\nend up in the you know the right sort of\nequilibrium here\nquestion\nalso if we are like automating some of\nthese adversarial attacks then wouldn't\nit be like for instance let's say we're\ntesting for non-miopia the only way that\nam could check for non-myerobia is if am\nunderstands the concept of non-wire it\nhas to understand the concept of giving\nup reward now to get reward later in\norder to check to see if a copy of\nitself is doing that\nyeah I think that when I say like check\nfor something like does it have a\nlong-term goal we expect that even\nsystems that don't have long-term goals\nwill at least understand the concept of\na long-term goal right\num I think that that's going to be\nreally hard to not have I mean in many\nways I think many current like language\nmodels you know can understand the basic\nconcept of a long-term goal\num and so that's going to be a pretty\nhard thing if that's the thing you're\nlooking for is like okay we want the\nmodel to not even know what a long-term\ngoal is I think that's gonna be a really\nhard thing to make work in this case but\nmaybe there are other things like\nwhatever it means to in fact not have a\nlong-term goal uh you know that you\ncould effectively you know try to verify\nvia scheme like this\nokay so we're going to try to analyze\nthis you know again you know relatively\nquickly you know but using the same sort\nof criteria so you know again we have\nthis outer alignment you know here we're\nstill trying to produce something that\nis imitating each stage that's sort of\nstill the goal here uh and so the outer\nalignment question is very similar\num we do need it to be the case that the\nlike you know all of the sort of checks\nthat we're doing uh are consistent with\nthe hch imitator if there's no h stage\nimitator that would pass any of our\nchecks then we're sort of in trouble\nhere\num but you know as long as that's the\ncase you know again the sort of thing\nthat we want is just this HTH imitator\nit's the same sort of story previously\num\nand then the inner alignment question of\ncourse I already talked about this but\nyou know we have this sort of key\nquestion of can the overseer you know\ncan the amplify version of the model\nthat is you know overseeing this\ntraining effectively you know push it in\nthe right direction right you know can\nit you know oversee it in such a way\nwhere if we train the model to pass that\noversight pushes it towards you know the\nright base in and not the wrong base and\nthis is a really tricky thing but you\nknow there are some things that we might\ntry right like you know trying to you\nknow verify things that are precursor\ndeception that help us you know push\naway from ever entering into this sort\nof deceptive basin\nokay and then again implementation\ncompetitiveness is very similar here\nexcept with the sort of key issue of\nwell now we have to actually do this\noversight you know continuously without\ntraining uh you know throughout training\nand so if that oversight is potentially\nvery expensive you know if our\ntransparency Jewels you know or whatever\noverseer is doing you try to understand\nthe model are very expensive and\ndifficult for us to operate\num you know that could be sort of\npotentially an issue here\num uh you know so we need to you know to\nbe in some in some way that we can sort\nof automate those sorts of tools\neffectively\num and then we have you know again the\nsort of same essentially the same\nperformance competitiveness question\nbecause we're still trying to get\nsomething that is basically just\nimitating hch\nYeah question\nso for this idea is the idea to Simply\nhave like a negative\nthing we try to avoid or a positive\nthing we try to aim towards and if it's\na positive thing we try to aim towards\nhow do we differentiate between the\nthing we want and do the thing that\nlooks like the thing we want\nyes I think this is a really good\nquestion I think that it's very unclear\nI think that it seems like you know it's\ngoing to be easier to try to have an\noverseer which is looking for some\npositive quality rather than negative\nquality because\num you know it it can be really tricky\nto look for a negative quality because\nnow there's like all of these very\ndifferent possible ways which I think\ncould be false and there's sort of maybe\nonly one way for a thing to be true\num but like you say you know positive\nqualities can be quite tricky because\nthen it's like well how do we\ndifferentiate from it looking like that\nand actually doing that you know the\nHope here would be something like well\nyou know the overseer is better doing\nthis differentiation than us it may have\naccess to a bunch of you know tools to\nbe able to look at and inspect models\ninternally and that could help it do\nthis evaluation more effectively\num but at the end of the day I think\nthat's a core you know concern right it\nis just in fact not clear that we would\nactually be the case then in this sort\nof a situation we would you know the\noverseer would actually be able to\neffectively distinguish between the\nmodel looking like it's doing the right\nthing and actually doing the right thing\nand that's sort of one of the key\ndifferences right you know one of the\nthings that might make you skeptical of\nthis approach as opposed to the previous\none is that now you know we're sort of\nintroducing additional optimization\npressure against the overseas right\nwe're sort of directly optimizing for\nthe overseas things do look good to the\noverseer and if look good to the\noverseer is very different than what we\nactually want\num that you know that could cause some\nthat could cause some serious issues but\nof course it also has benefits over the\nprevious one because now you know the\nprevious one you know it just you know\ncould you know fail and then we have no\nrecourse but here you know we have some\nway to actually sort of address the\nissue\nokay okay\num all right so we have those two\napproaches which are these sort of both\nbased on the imitative amplification\nidea they're both trying to say well you\nknow we want to imitate this\namplification operator you know we want\nto imitate this HTH sort of you know\nprocess\num there's lots of other things though\nas well so I'm next thing I want to talk\nabout is sort of a variant on this sort\nof application idea but now we're sort\nof not going to be using imitation we're\ndoing something else instead and I think\nthat in many ways we've sort of talked\npreviously about something like uh you\nknow reinforcement learning from Human\nfeedback and this is going to be very\nsimilar to sort of how you might think\nabout a sort of scaled up reinforcement\nlearning from Human feedback approach\nbut we're going to call it\napproval-based applications what does\nthis mean\nhere's the approach so we're going to\nsay uh again we sort of have a model we\nhave the Amplified version of the model\nwhich is like a human with access to\nthat model and we're going to train the\nmodel uh on uh not uh previously which\nwe're sort of training the model and\nimitating the application uh uh\nAmplified version of that model but now\nrather than trying to imitate it we're\ngoing to try to maximize its approval so\nthe Amplified version of the model has\nsome you know uh extends to which it\napproves what the model is doing it can\nyou know give reward right it can see an\nindividual samples from that model and\nit can say you know I like this sample I\ndon't like this sample in the same way\nthat uh you know some sort of\nreinforcement linear feedback approach\nwould work where you can take samples\nfrom the thing you can score those\nsamples and you can you know train the\nmodel to have high scores according to\nthe you know the the sort of score\ngaming procedure and in this case the\nscore Giver here is rather than just a\nhuman uh it is you know this Amplified\nversion of the original model we're\nsaying well you know a human can only\ngive scores that are so good you know at\nsome point you know the score is given\nby a human will stop being very\neffective because uh there's a limit to\nthe sort of ability of a human's over\nyou know effectively oversee you know\nand understand whether you know a sort\nof individual thing the model has done\nis good or bad but maybe the Amplified\nversion of the model is better right the\nhuman with access to the model itself\ncan do a better job of being able to\noversee and understand what the model\ndid a good job or not and so we're going\nto say you know we're going to take them\nall and we're going to train it on\nmaximizing the approval of the amplify\nversion of the original model\num and then you know while we're at it\nwe can sort of also you know one of the\nother things that we want to have as\npart of the approval here is the same\nsort of relaxed Arizona training we\ntalked about previously in some sense\nthese are basically the exact same thing\nit is just the overall approval of the\namplify version of the model but you\nknow it's important to emphasize that\none of the things that we would like to\nbe contained in that approval signal is\nthe extent to which the amplify version\nof the model actually believes that the\nmodel you know being trained is doing\nthe right thing you know internally with\nwhatever you know oversight tools you\nknow transparency tools you might have\naccess to we want it to in fact you know\nbe be actually doing the right thing uh\nyou know to the greatest extent possible\nthey're able to verify\num and so that's the idea here is that\nyou know now instead of trying to\nimitate the the sort of amplite overseer\nnow we're training to maximize the\napproval of the amplite overseer\ndirectly um you know including all of\nthe same sort of black several training\num that we talked about uh previously\nquestion\nso what we've looked at like maximizes\nmaximization for an existing systems\nlike the thing that most books like\nadopt to a dark neuron or the sentence\nthat is most likely to result in the\nnext token we tend to end up with these\nalien looking images or these alien\nlooking sentences\nwhat makes us think that maximizing the\napproval of amp M won't lead to the same\nthing\nyeah so it's extremely unclear it\ntotally could lead to some really weird\nthings so some evidence as to reasons\nwhy you might not\num well so rohf you know in fact doing\nthis with current humans where we just\nwe don't amplify the human you know we\ndon't do some amplification at all we\njust have an individual human with\naccess to nothing doing the evaluation\num does often yield models which sort of\nlook like they're doing the right thing\nnow they might not in fact be doing the\nright thing and we talked previously\nabout you know how do we actually\nunderstand these models you know with\nsomething like the rhf conditioning\nhypothesis but in many ways you know\nthey at least look like they're sort of\ndoing the right thing\num because you know we've trained them\nto look like they're doing the right\nthing\num if they you know in fact look like\nthey're doing something really weird you\nknow uh Alien thing then well you know\nunless that really weird alien thing\nlook good to humans then you know\nhopefully you know it wouldn't be\nincentivized by this process\num of course there's the sort of key\nissue of you might be doing some really\nweird alien thing that looks good to\nhumans but is really not the thing that\nwe wanted right it's doing internally\nsome very strange algorithm you know\nsomething that is really not the sort of\nthing we wanted it to be doing but it's\nstill nevertheless looks good to the\nhumans The Hope here as opposed to just\ndoing something like vanilla or like\nJeff would be well maybe you know the\nAmplified version of the human is better\nat being able to understand and evaluate\nwhat the model would do than than just\nor the amplify version model you know\nthe human with access to the model is\nbetter at doing that evaluation than\njust the vanilla human or just the\nvanilla model right then now we have\nthis ability to for the human you know\ndoing the evaluation to query the model\nand use the model itself to help it do\nthat evaluation so maybe now it's harder\nto trick the human it's harder to find\nweird edge cases where the you know the\nvaluation is no longer effective\num and so maybe it sort of works in into\nsituations where the you know rohf would\nnot that's the sort of that's the sort\nof Hope of you know why you might like\nsomething like this Yeah question\nhow would we take the first step so my\nimpression is that the level of\nsub-humans react we have in our GPT now\nI don't think that the human vgbt can do\na better work at RL HF significantly\nbetter wrong than just in qumes alone so\nfor this amplification process to work\nwe need to get to some initial level\nwhere the model can already have the\nhuman and I would imagine but at that\npoint\nit's it might already get scary I don't\nknow\nyes I think\nyou just said which is this sort of you\nknow all starts to you know matter as we\nstart to get into the regime where you\nknow\num the model is actually in fact helpful\nfor the human doing the evaluation\num and I absolutely agree that things\ncould start to get scary you know as you\nstart to get into that regime you know\nbefore this really becomes relevant\num as we talked about at the beginning a\nlot of the approaches that we're going\nto be talking about today are really\ntrying to deal with the question of well\nyou know as we start to get into those\nfurther capabilities regimes what do we\ndo right so when you're in the sort of\nearlier capability of genes where you're\njust dealing with models you know\nthey're you know that that are just like\npredictive models then we can you know\nmaybe address them in other ways right\nwe can try to understand you know just\nconditioning them well making sure\nthey're doing safe things but as we\nstart to get into regimes where they're\ngetting more and more capable we need to\nhave other approaches that can help us\nyou know deal with the you know more and\nmore cable models and so that's sort of\none of the things that we might hope to\nbe doing here\num in some sense this is sort of like I\nsaid where this approach just sort of\ncollapses down to something like vanilla\nyou know RL HF when you're in a\nsituation where the model is not at all\nhelpful for doing the valuation but as\nthe model does become helpful for doing\nthe valuation maybe this helps you do\nthat evaluation more effectively and so\nwe're saying well this can sort of help\nyou you scale up things as you sort of\nstart to get into that regime\nmaybe I mean it's very unclear right you\nknow maybe the case that um you know it\ndoesn't help you know it could even be\nthe case that it hurts you know maybe\nthe you know in some say in some cases\nyou know the model uh you know like we\nwere saying previously of like models\ndeceptive or something like that it can\nyou know hurt your ability to evaluate\nthe model because it can you know\nsabotage the human evaluation because\nthe model doesn't want to be evaluated\nyou know effectively and so there's all\nsorts of ways in which things could go\nvery strange here\num you know it could just be the case\nthat right now that we cut we're\ntraining on this you know particular\nevaluation signal we just you know good\nheart the evaluation signal by you know\nfinding some very strange you know\nsolution that technically looks good but\nin fact is doing something really weird\nso there's a lot of ways in which this\ncould fail but the basic idea is that\nwe're trying to take that you know\nsimple you know valuation signal that an\nindividual human produce and make it\nmake it better make it you know\npotentially able to scale beyond that\nokay question\nso I think I've\nmisrepresented my question from earlier\nso you may basically mention that you\nknow if am if AMPM is\nbasically dying off what it imagines a\nhuman would want and that's a good thing\nbut I guess what I'm saying is let's\ntake the Dolby example again if you try\nand maximize what kind of dog looks good\nto a human what you probably get is this\nincredibly adorable looking golden\nretriever or something\nbut if you try and maximize what looks\ngood to a language to a an image model\nthat can very well differentiate those\nfrom other things you end up with this\npsychedelic set of dog heads so it seems\nlike if m m is able to understand the\nhuman's preferences perfectly and even\ndo better then AMP one is M1 is safe\nbut it feels like a huge amount of the\ndifficulty is actually getting from M to\namp m in the first place when amp m is\njust not going to be the same as the\nhuman at the extremes that's sort of\nwhat I'm suggesting\nyeah I mean I think that the I think\nthis is absolutely valid criticism it is\ndefinitely the case that you know amp m\ni mean so it is important to understand\nright amp m is not just hch right it's\nnot like the overseer here is just like\nthe perfect it's just a bunch of humans\nthere's no models right like at each\nindividual point in time in training the\noverseer that we actually in fact have\nis just a human with access to the model\nand that is going to inherit all of the\nsorts of weirdness of whatever the model\nis that we currently have to give to the\nhuman and that can absolutely introduce\nsome really strange effects that can\nsort of make this tricky and so that's\nyou know that's why I think it's\nimportant to understand that the\noverseer right is it is AMPM it is not\nyou know anything more powerful than\nampam or anything weaker than I Am Fam\nit is just you know it is the best that\nwe can do it is the human with access to\nthe best you know model that we have as\nyou know so far in training human in the\nloop at every step there is a human and\nloop at every step here but it's not an\ninfinite tree of humans it's going to\nloop at every step right it is just a\nhuman in the loop with every step with\naccess to whatever the best model is\nthat we have\nyeah\nquestion\nI don't know higher level and this is\nbut why is the the same thing the same\nmodel\nup and down like why don't we like where\nmen we are worried about these\ncooperating with each other they\noverseer robot and uh trained robot why\ndon't we try to train two classes of\nrobots one that we actually want to use\nthen one that is specialized or hyping\nthe human in over C I don't know how\nthat would work but it is more natural\nah I think you could do this you totally\ncould train a separate overseer robot\nthan or you know overseer AI than the AI\nthat you're training you know\nindividually there's some reasons why\nyou might not want to do that most one\nmaybe the most obvious reason is like\nwell now you have to train two AIS and\nif any individual training run is you\nknow extremely expensive that could be a\nreally large competitiveness head\num there's also you know something that\nwe started this out you know with which\nis like well there's some nice property\nwhich is in some sense in each\nindividual point the overseer like is\nthe target of the training right like we\nbelieve that the overseer is stronger\nthan the model being trained because the\noverseer is the Amplified version of the\nmodel being trained and you know if we\ndidn't have that guarantee then maybe\nwe'd be less likely to believe that the\noverseer is actually going to be able to\nprovide effective oversight now I think\nwith that guaranteed it's it's like I\nsaid previously it's not an actual hard\nguarantee like it may be the case that\nthe overseers in some sense stronger but\nin some sense you know uh the task of\noversight is hard enough that that\ndoesn't matter\num so it's not it's not a hard guarantee\nbut you know it is maybe a nice property\nthat we want to try to leverage here\nokay so again we can sort of go through\nthe same sort of analysis so one thing\nthat I sort of want to you know talk\nabout very briefly is\num you know we were talking a lot about\nhch and I think it's natural to sort of\ntake something like this approval-based\namplification process and assume that it\nmust also limit to hch but I think an\nimportant thing to keep in mind\num is well you know all that we did is\nwe just trained on you know the\namplification operator and previously we\nhad this you know argument that well\nwhen we were doing imitation the\ntraining on imitating the amplification\noperator you know limits to something\nlike hch but I want to point out because\nI think this is really important that\nwhen you're doing something like\napproval-based application\num that's not the case so what is the\nlimit of approval-based application well\nso you know unpacking it we have a human\nand that human gets to consult two\nmodels right that's the individual uh\nyou know thing that we're trying to you\nknow maximize the approval of this is\none model Amplified right this is the\nAmplified version of them\num and then what we do is we train\nanother version uh uh you know another\nmodel to maximize the approval of you\nknow human Consulting uh M right\nuh and we sort of iterate this procedure\nand you can think about what's sort of\nhappening here as a sort of infinite\nchain uh you know infinite sort of tree\nlike each stage but previously we had\nthe sort of property that as each\nindividual sort of amplification\noperator expanded each one of these\nmodels in the limit should be equivalent\nto the sort of human thing that it's\nthat it's sort of training because it's\njust imitating it but that's not the\ncase anymore now it's just maximizing\nthe approval so instead of these sorts\nof you know direct tree of humans we\nhave a tree of sort of human model\nthings each imitating each other or so\nmaximizing the approval of each other so\nin some sense you can sort of think\nabout it as well it's like a human\nConsulting and model\nsuch that that model maximizes the\napproval of a human Consulting models\nthat maximize the approval of human\nConsulting models and so on\num and this is a really weird object\nright so I think that's worth sort of\npointing out is that the the limit here\nis no longer something really nice and\neasy to understand now it's very unclear\nright like you know how much we care\nabout the limit right because it's not\nthe case that we actually even get the\nlimit right so previously you know we\ntalked about how well there's this nice\ntheoretical object of each stage but we\ndon't actually know whether we're going\nto get anything that looks like the\ntheoretical HDH object in practice\nand of course we don't know what we're\ngoing to get in practice here as well\nbut it's at least worth pointing out the\nlimit here is much more messy right we\nno longer sort of should expect that hch\nis sort of a plausible thing that we\ncould get now we're getting something\nmuch weirder we're getting this you know\ntree of approval maximization\nquestion\nuh in the last talk you talked about the\nsort of rohs conditioning hypothesis\nwhich was just that like doing or\ntraining language small to do rohf is\nbasically almost equivalent to\nfine-tuning on but until you got some\nprompts so I mean in the same sense\ncould you say that like Max my model M\nmaximize the approval of like Amplified\nM or would itself also sort of could\nthat be a sort of conditioning thing too\nor so could in a sense could this like\npractically be equivalent to HH\nso I think that you absolutely can apply\nthis sort of relationship conditioning\nhypothesis if it is true to this\nsituation but even in that case I don't\nthink it would be well described as hch\nbecause in that case what you would be\nthinking about this as would be saying\nwell it's a human Consulting the model\nwhere the model is the conditional such\nthat when applied to the pre-training\ndistribution results in the best\napproval according to the the rest of\nthe tree and that's still a really weird\nobject right like it's a little bit\neasier to understand because it's no\nlonger either the case that it's just\nsort of like whatever the model would be\nthe maximize the approval now it's sort\nof a smaller class it's whatever the\nconditional would be the maximizes\napproval but there's still no reason to\nbelieve that that conditional is you\nknow some sort of imitation thing right\num you know it is often you know and in\nfact you know we probably shouldn't\nexpect that right like you know if I'm\ngiving approval to a thing I'm not\nnecessarily going to say the thing that\nis most you know approved by me is\nliterally me right maybe you know maybe\nthat's plausible but most of the time\nthat's probably not going to be the case\nand so um you know I think that we\nshouldn't you know even even in the case\nwith our all HF conditioning hypothesis\nis true we still shouldn't think of the\nlimit of this procedure as something\nlike hch it is something much weirder\nokay so I think this is just important\nto sort of understand what this approach\nis doing\num so again we can sort of you know try\nto work through these sorts of same\nquestions previously so we have the\nouter lineman question and the\nunderlyment question here is is quite\nweird because it is the question of well\nthe thing what is the thing we're trying\nto get I think in some sense the thing\nwe're trying to get is this sort of\nlimit it's this sort of tree we're\nstarting to say well we want the thing\nthat is just sort of in fact has the\nbest approval according to you know what\num\nyou know this sort of Amplified version\nof the model you know going all the way\ndown you know would say\num but that's a really weird thing to\nunderstand so you know to make that good\nwe really have to be the case that it\nneeds a visual point along the process\nwe're really verifying that you know uh\nwe believe that the thing that would\nmaximize the approval of the overseer\nthat individual point in time would\nactually be something that we'd be happy\nwith that the overseer is going to be\neffective at being able to successfully\noversee and give good feedback uh you\nknow be able to understand if the model\nis doing something bad then it gives you\nknow negative feedback and it's doing\nsomething good it gives positive\nfeedback\num you know in any individual point in\ntime you know across the whole you know\nthe process to really be guaranteed that\nwe end up with something sort of doing\nwhat we want\num and and so you know in some sense\nsort of this is this is you know also\nsort of inner alignment question here\nright we were sort of really relying on\nit being the case that the overseer at\neach original point in time is able to\nsay you know is it the case that this\nmodel is sort of doing you know going\ninto some really weird Basin is it\nbecoming deceptive you know is it doing\nsomething you know that we don't want\nand you know provide you know feedback\nthat is able to prevent it from you know\ngoing into that equilibrium we can sort\nof think about this again you know as\nthey're sort of being these two\nequilibrium here where you know if we in\nfact get a model that is really helpful\nin trying to do the right thing and it\nprovides overseer to its you know\noversight to itself it should say Yep\nthis model is really helpful it's trying\nto do the right thing so you know it\ngets good you know score according to me\nbut if you have a model that's really\ndeceptive then you know even though\nthere's a human in the loop it might\nstill be able to trick the human and you\nknow make it so that the human approval\nyou know is still really positive for\nthe deceptive model and so we sort of\nstill have these sorts of two\nequilibriums where you could end up with\nyou know a deceptive model providing\ndeceptive oversight or you can end up\nwith a really helpful you know good\nmodel providing good oversight and we're\nagain in this sort of equilibrium\nselection process where we're sort of\nreally trying to make sure the the\noverseer can effectively guide us into\nthe Basin that we want into the\nsituation where the model is you know in\nfact doing the right thing it's you know\nreally trying to you know do what we\nwant and do so in a way that uh you know\nit can you know continuously keep it in\nthat Basin once it's there which you\nknow it should should be uh in fact you\nknow a stable equilibrium because you\nknow you it should in fact only oversee\nyou know or you know\na really helpful good model should\nprovide good oversight to a really\nhelpful good model but whether we\nactually end up in that equilibrium is\nvery unclear\nokay and then uh you know again\ncompetitiveness so um implication\ncompetitiveness is you know is really\nnice you know here we we absolutely know\nhow to do effective you know rlh style\ntraining we can in fact train on uh you\nknow models uh you know train models on\nreward signals given by humans you know\ngiven by models\num\nperformance competitiveness is a little\nbit trickier because we don't really\nknow whether you know it is in fact the\ncase that this sort of tree this sort of\nmaximization process actually produces\nmodels we're able to accomplish the\ntasks that we want\num I think that there are some sort of\ninteresting sort of challenges here so\none example of sort of challenge is well\nit's very unclear if uh you can actually\nprovide oversight in such a way that\ngets the thing to actually do very\ncomplex tasks so you know if you really\nwant to do something really complex and\nreally difficult to evaluate like you\nknow build a you know rocket ship you\nknow it can be very difficult you know\nto distinguish between well if the\nrocket ship just looks good that doesn't\nmean it's actually going to be effective\nrocket ship right and so if you want to\nbuild a model it is going to be able to\nyou know successfully produce rocket\nships\num it's it might not be sufficient to\njust sort of have some oversee which is\nlooking at the model's output and\nevaluating to what extent it thinks that\noutput looks good because looks good\nmight actually be a lower bar than you\nknow uh the sorts of things that we care\nabout right it may in fact be the case\nthat an actual successful rocket ship uh\nyou know is very hard to build and very\neasy you know very very hard to evaluate\nthat you can't really tell whether it's\nin fact going to work just from you know\nlooking at it and sort of trying to give\nsome approval signal\num and so this sort of in some ways you\nmight even expect that this sort of\nhurts the model capabilities that you\nknow if it were able to really just\nthink through the problem itself and you\nknow sort of you know just do some sort\nof HH style thing where it's just sort\nof a bunch of you know you know\nmimicking something like you know an hch\nprocess where just sort of you know a\nbunch of humans thinking through exactly\nhow to solve the problem that they would\nbe able to solve you know produce\nsomething successful but if instead the\nmodel is sort of just producing the\nthing the sort of minimal thing which\nwould look good according to an overseer\nthe minimal thing that looks good\naccording to an overseer might be worse\nright it might be the case that in fact\nyou know if you really put a bunch of\neffort and you know good you know\nthinking African it's trying to you know\ndesign a rocket ship you can design a\ngood rocket ship but if you produce the\nminimal rocket ship that looks good to\nit over this ear it could boost a\nterrible rocket ship that just you know\nhappens to have plans that look\neffective\num and so I think it's in fact quite\nunclear to what extent this is actually\na competitive way to do things um it may\nbe it may be the case that the oversight\nthat we can provide is actually very\neffective that it can distinguish\nbetween you know good you know Solutions\nand Solutions but it could also be the\ncase that it's not effective that you\nknow in fact it's actually you know\nworse for a competitiveness question\nanother thing I'm thinking of though I'm\nnot sure if this makes it actually any\nworse than hch is if what if like at\nhigher levels we're trying to get it to\ndo things that humans don't know how to\ndo like for instance let's say we want\nit to build a flying machine and we\ndon't know how to fly and it comes up\nwith something that like the Wright\nBrothers playing and I might think well\nI mean I don't know why the model's\nsaying that looks good it doesn't even\nflap its wings how is it going to get\noff the ground\nso do you think that that would cause a\nproblem with the approval maximization\nthat wouldn't happen for hch because in\nhch I mean because I might not think of\nthat idea either\nuh yeah absolutely I think the thing\nyou're describing is absolutely sort of\nproblem that can happen here I think\nthat you know one thing that is worth\npointing out is that the way you're sort\nof hoping to try to avoid some of those\nproblems here is that the overseer is\nnot just a human it is an amplified you\nknow model right it is a human with\naccess to the model and so maybe the\nAmplified you know version is actually\nable to provide effective feedback in\nthat situation you know the human can\nask the model if you're like okay okay\nbut why are the you know the wings do\nthey not flap you know what is the\nreason for this and you know maybe you\ncan get some explanation that actually\nallows the human to understand and\neffectively evaluate it very unclear\nthough right so I think that that might\nhelp but it but it might not so but\nthat's at least the hope as to why you\nmight hope that you know that this sort\nof thing would work in that situation\nokay so you know this is our uh you know\nthat's approval-based application\nthere's other names also for for you\nknow describing sorts of schemes like\napproval based application I'll talk\nabout this a little bit at the end but\nyou know basically this is the sort of\nrhf you know style approach\nokay so I want to move on to another one\nso the next thing that I want to talk\nabout is AI safety via debate\nso what is a safety via debate so uh\nhere's the idea uh we're going to train\na model to win debates against a copy of\nitself in front of a human judge so we\nhave uh you know the model we asked some\nquestions so we're again sort of you\nknow operating this sort of question\nanswering framework uh you know we input\nsome question and then we sort of ask\nthe model you know what it thinks and\nthen we have another model which also\nprovides some answer and then those two\nmodels sort of uh you know have a debate\nyou know they sort of alternate you know\nsaying terms about you know what they\nthink about what the other model has\nsaid and then eventually the human sort\nof gets to look at the output of the\ndifferent models and decide you know\nwhich which you know thing that the\nmodel you know which model said the\nthing that they thought was more\nconvincing was was in fact more correct\num and then we train the model on the\nsort of approval signal of the human at\nthe end of the debate right so\npreviously we had this sort of approval\nmaximization where it was\num you know maximizing the approval of\nan individual human you know potentially\na human with access to the model now\nwe're saying well it is just a human\nthey don't have access to a model but\nthey are not evaluating an individual\nmodel output they're evaluating a whole\ndebate transcript they're evaluating you\nknow two models sort of arguing back and\nforth on what the you know correct\nanswer about some questions should be\nokay\num and there's a couple of other sort of\nbells and whistles that often will be\nadded on top of this sort of approach as\nwell one of them is cross-examination so\nwe'll often sort of allow a model to\nindividual points in the debate to sort\nof query other models from previous\npoints in the debate and ask them\narbitrary questions so you can have the\nsort of model at the end which you can\ngo back to the model the beginning and\nsay okay you said that thing previously\nyou know clarify this you know and\nexplain what that thing was this is the\nsort of thing that you you can do with\nmodels that you can't necessarily do\nwith humans in debate where you know you\ncan't go back and make a you know make a\ncopy of what the human looked like you\nknow an hour ago previously in the\ndebate and you know ask that alternative\ncopy you know extra questions that's the\nthing that you can do in you know an AI\ndebate that could help in various\ndifferent ways\num\nit can give you the ability to sort of\nuh you know make sure you know is the\nmodel actually answering consistently\nyou know if it answers one way in one\nsituation another way in one in another\nsituation then you know you can see that\nby you know making a copy of the model\npreviously and you know seeing that it's\nanswering different ways so there's\nsorts of things that you can do to sort\nof you know improve the scenario as well\nbased on just like you know the most\nsimple versions but that's the basic\nidea\num another thing that we can also add on\ntop is just sort of oversight so we can\nstill have you know the two models\ndebating against each other can have the\nability to you know whatever\ntransparency you know oversight tools we\nmight have we've been talking about\npreviously well the models you know here\ncan also have the ability to use them on\neach other so you know one way to win\nthe debate might be to prove that you\nknow your partner is you know trying to\ndeceive the human right you know if you\ncan use some transparency tools look\ninside the other model and be like ah\nthis model is you know doing something\nyou know dastardly uh you know then\nmaybe that's another way to win the\ndebate and so we can sort of try to say\nwell you know this is another way to\nincentivize the models to effectively\nyou know try to oversee each other\num yeah question\nso if we're training these models to be\nlike convincingly truthful and identify\nwise are we also training it to be a\nvery convincing liar that avoids being\ndetected by itself\nI've been training two different things\nhere one of them is very dangerous and\nwe're kind of coping the other one wins\nthat is I think exactly correct it is\ntotally the case that we are training\nthe model the sort of second debater in\nsome sense to lie right so the first\ndebater that sort of gets to pick their\nposition first it you know gets to pick\nthe the the best position the most\ndefensible position but then the second\ndebater is now in a situation where they\ncan't pick that they have to pick the\nnext best position and if the next best\nposition is you know less truthful then\nyeah we're sort of directly\nincentivizing the next debater to you\nknow uh you know be saying something\nwhich is untruthful now the sort of Hope\nis well if in the end you know we can\nincentivize the first debater to always\nsay the truth then the way you actually\nuse this model is you don't actually run\ndebates you just ask the first debater\nwhat their position would be and then\nyou use that as like the truthful answer\nand so you never actually query the\nsecond debater in practice when you're\nsort of deploying this model\num but of course like I mentioned at the\nbeginning these are the copies of the\nsame model just in in you know different\nsituations one of them is trying to play\nthe second debater and one of them is\ntrying to play the first debater and so\nyes you know to the extent that the\nsecond Debaters you know really learns\neffectively how to lie and try to\ndeceive then you know that also you know\nis going to be something the first\nInvader learns as well and of course it\ncould be the case that equilibrium isn't\nthe case right that the most convincing\nargument\num that you know the first Vader wants\nto make is not the truth right it could\nbe that the most convincing argument is\nin fact totally different of truth that\nthere's lots of ways to manipulate and\ndeceive humans and you know convince\nthem of things that are not true\num that you know the first debater will\nlearn to stay and set now of course it\nneeds to be the case that whatever the\nthing is that the first debater is\nconvincing the human of that's a lie\nthat it's um there's no sort of\neffective way for the second Invader to\ncounter that right so if the first\ndebater you know says something that is\nuntruthful then it has to be you know\npersuasive even in the face of the\nsecond debate or trying to explain why\nit's untruthful\num but that is quite plausible there's\nlots of situations where humans can be\npersuaded of false things even when they\nhave heard the true argument for why\nthat thing is is in fact false\nokay yeah question\nyeah these babies similar but yeah I\nwould expect that\nthis will train them to be\nin to say things that humans and wanting\nto hear like it's it halves a little\nthat they debate each other maybe there\nare some topics when we can change our\nminds but this team still seems very\nscary especially anything even a little\nrelative to politics like I imagine that\nnot necessary yeah the gradient is not\nnecessary towards to remove spot towards\nthought the the thing that is easier to\nargue for easier is the rq4 which might\nnot be the truth\nuh yeah I just agree I think that that\nis absolutely correct I think that my\ninclination is also that this is a\nreally really tricky thing to get right\nbecause\num there is such a large difference in\nmany cases between things that are\nconvincing to humans and things that are\nactually true\num and so you know given that you know\nyou you have just a human doing this\nvaluation at the end of the debate yes\nin some sense the human doing evaluation\nhas the help of The Debaters because\nthey're each trying to explain why the\nother debater is wrong but um you know\nit does seem you know absolutely the\ncase that there might be many ways for\nthe debater to sort of trick and you\nknow uh humans various different ways\nthere are some in fact even some nice\nconcrete examples of ways in which this\ncould happen in an AI debate\num the sort of the obfuscated arguments\nproblems maybe the sort of most\ncanonical of these\num I'm not going to go into a bunch of\ndetail on what that looks like but there\nabsolutely are cases where things can\nget very tricky for you know actually\nyou know the sort of limit here being\ntruth now there are some things that you\ncan do right so I mentioned stuff like\ncross-examination they can sort of help\nyou know alleviate some of these issues\nbut\num it definitely doesn't it's not clear\nyou know how much you can sort of solve\nuh just by changing the sort of basic\nparameters of the debate\nokay so so we'll go through sort of\nagain you know a quick analysis here so\num you know the outer alignment sort of\nquestion here is again just sort of this\nbasic question of is the first debater\nsort of incentivized to be honest right\num because the thing that we get here\nthis thing we're trying to get is we're\ntrying to get a model that sort of says\nthe thing that is\num has the property that it is sort of\nthe most convincing argument right it is\nthe thing that if the humans saw you\nknow whatever the possible you know most\nyou know best possible answers that\nargument would be the human would still\nbelieve that that individual argument is\nthe best in that first argument so\nthat's the thing that we're sort of that\nwe were sort of hoping that we get the\nfirst Invader to always answer with the\nfirst Vader always answers with whatever\nthe most convincing thing would be you\nknow modulo whatever the answers to that\nthing would be\num and we're hoping that thing is in\nfact truthful and honest and helpful\nthat you know whatever the most\nconvincing thing is in that situation is\nin fact the you know the most truthful\nand helpful thing and of course there's\nyou know lots of reasons to believe that\nmight not be the case there's lots of\nsituations which you know maybe humans\ncan be convinced of things that um\nare you know not in fact true uh and so\nyou know in that situation we would sort\nof you know have some outer alignment\nissues but the goal at least would be to\nget a model that is in fact saying that\nsort of most uh persuasive thing and\nthat most persuasive thing would be true\nokay and then of course there's also the\ninner alignment question of how do we\nactually even guarantee that we get that\nmost persuasive thing at all because you\nknow we have this sort of you know\nindividual setup here where we have you\nknow these individual Debaters debating\nagainst each other um there's a lot of\nweird things can happen so you know you\ncan have things like if you have two\ndeceptive debaters and you know even in\nsome sense you know the deceptive\nDebaters could you know maybe use\ntransparency tools to inspect each other\nand realize that the other one's\ndeceptive and use that to win the debate\nbut if they're both deceptive they can\njust agree not to do that and then you\nyou never discover that they're\ndeceptive and you just end up with a\ndeceptive you know equilibrium right so\nsimilarly you know to how we've been\ntalking about previously there's an\nequilibrium selection problem where you\nknow it is absolutely the case that you\nknow the sort of limit of this procedure\ncould just be you know two Debaters that\nare both you know just trying to trick\nyou uh that you know never reveal that\nfact\num\nyeah what's the question here\nmaybe that I don't understand this and\nmy impression was that this is a sort of\nzero-sum game between two Debaters at\nthe end we decide we jumbled right and\ngive rewards and losses accordingly I\ndon't know how to do zeros people put\n80s could cooperate in this Zero Sum\nyeah so it is a zero sum according to\nthe rewards but\num\nis your common coin\nactual objectives right so you could\nhave models and those models are\ndeceptive you know maybe they care about\nsome long-term goal eventually in the\nworld and according to that long-term\ngoal it might not be a zero-sum game\nanymore right they might be willing to\ncooperate in right now in training uh to\ntry to eventually get some other thing\nthat they might want in the real world\nlater right so even though in theory we\nhave set up a zero sum game and so the\nagents shouldn't be cooperating with\neach other there's no reason necessarily\nto expect that the actual algorithm\nwe're going to get the models to be\nimplanting is going to you know actually\ncare about that uh you know that that\nfact right and so it could absolutely be\na thing that is willing to cooperate for\nreasons other than you know just winning\nthis debate game\nYeah question\nso if I'm understanding this all right\nthere's like long-term goal thing\ncorrectly is it like a case of\nthese two deceptive Debaters ask okay\ngiven these two possible answers to the\nquestion\nwhich way would I prefer to be gradient\ndescended towards in the future and then\nthey both just agree to let that side\nwin\nthey all I guess they both individually\nincentivized to let that side win there\ndoesn't need to be an actual agreement\num so that would be one example of a\nsituation in which you could have you\nknow deceptive Debaters doing something\nweird it's definitely not the only thing\nthat you could have they would be weird\nlike that I mean so a really simple\nthing would just be well if you have\ndeceptive beaters and they like both\nwant some you know weird thing\neventually in the world then they could\njust play the debate game exactly\nnormally except when they try to do the\nthing where they inspect the other\ndebater and see whether the other\ndebater is doing the right thing\ninternally they just you know gloss over\nthe part where the other debater is\ndeceptive like that right that's that's\nalso you know another plausible thing\nwhere they're not necessarily thinking\nreally critically about exactly what the\ngradient signal would do they're just\nsort of you know not doing the thing\nwhich we need which is effectively you\nknow overseeing and understanding\nwhether the other model is doing the\nright thing\num there's lots of versions of this but\nI think the point the thing that I want\nto point out is just that it is\nabsolutely the case that there are weird\nequilibriums here right where you could\nend up in a situation where you have a\nmodel and that model is doing the wrong\nthing and because it's doing the wrong\nthing it results in the debate going in\nsuch a way that reinforces that wrong\nthing\num\nand so it doesn't need to be the case\nthat you actually end up with you know\nthe sort of equilibrium that we want\nwhere it's you know always outputting\nthe most persuasive argument and of\ncourse it also might be the case the\nmost pervasive argument is not even good\nat all but you know we sort of have\nagain you know both of these problems\nright we need to be the case that you\nknow the most pervasive argument here is\ngood and also that\num you know the we actually end up in\nthat desirable equilibrium\num\nokay and then again we have the sodium\nnotation competitiveness\num I think that you know again this is\nsort of a this is a sort of game that we\nare at least you know often well\nsituated to be able to train models to\ndo this is sort of very similar to the\nsort of way that we will train systems\nlike alphago where you can just sort of\nhave a zero sum you know RL game you can\ntrain via self-play\num and then you know you know training\nmodels do well on the game\num\ncompetitiveness is um\na little bit you know tricky here I\nthink the sort of idea of this sort of\napproach is we're saying well\num you know by able to\num buy you know having the you know all\nof these various different arguments you\nknow that we're sort of spelling out the\nwhole thing then it is the case that you\nknow we can have really good evaluations\nright so we think about the spaceship\nexample from earlier where well we're\nsaying you know isn't in fact the case\nthat by about you know providing this\nevaluation signal and then maximizing\nthe evaluation signal we get a model\nthat's actually able to do you know\neffectively do the sort of task that you\nwanted to do\num you might hope that it's easier to\nevaluate you know how good the spaceship\nactually will be if you have access to a\ndebater that's trying to critique the\nspaceship and you know point out all the\nflaws in it\num but it's not clear right you know it\ncould totally still be the case that you\ncan you know one of The Debaters can\nmake a really compelling case for you\nknow why the spaceship would look great\nand in fact you know would would fail\nand it would be better if you just had\nthe model you know in fact directly\nthinking about how to build it rather\nthan just trying to you know produce the\nthing that would you know be the most\npersuasive\num and so that's absolutely still you\nknow a competitiveness problem here as\nwell\nokay great\num\nokay so next up we have a sort of\nvariant on the uh debate system uh the\nsort of on AI safety via debate that is\nin many ways very similar but also a\nlittle bit different this is the market\nmaking idea\nso what is market making so Market\nmaking uh is sort of similar to the\ndebate game where we're still going to\nhave a sort of debate structure where we\nsort of alternate between you know\nmodels making various different\narguments but we're not going to have\nthis sort of two player structure\nanymore where we have like you know two\nDebaters arguing for two different\npositions\num instead we do something like this we\nhave a question we feed that question to\nuh a a market a sort of Market model\nwhich says you know here are all the\ndifferent possible here's a distribution\non what I think the human is going to\nanswer to this question after this whole\nprocedure\num and then there's a sort of adversary\nthe sort of uh in the in the game which\nthen sort of provides some additional\ninformation and that additional\ninformation updates the market uh and\nthen we sort of repeat this and then\neventually the human gets to look at all\nof the information provided and sort of\nproduce their final answers this is a\nlittle bit weird so let me sort of try\nto uh you know under explain what's\nhappening here so the idea is that the\nfinal answer you know the human is\ntrained at the end to um\nproduce you know the human produces some\nfinal answer by looking at all of the\nvarious different information that's\nproduced in the transcript in the market\nin each individual point in time is\ntrained to imitate what the human will\nsay at the very very end of the whole\nprocedure so after the human has seen\nall of the information provided in the\nprocedure the market is trained to\nimitate what the human will say at the\nvery end so you can think about this\nright as the human's Judgment at the\nvery end in the debate game right\nthere's a whole transcript of you know\nsomething that's happening of you know\nvery similar Debaters going back and\nforth and the market is trained to sort\nof predict at the end of that transcript\nwhat will the human say what will the\nfinal evaluation of judgment be of the\nhuman\nnow the way that we generate that\ntranscript here is different than the\ndebate game so the way that we generate\nthat transcript is each individual point\nin time the market right is making\npredictions about what the human will\neventually say and the adversary is\ntrained to produce whatever information\nwill most change that prediction\nwhatever information will most cause\nmost move the market well most sort of\ncause the Market's prediction about what\nthe human will say at the end to be\ndifferent than what it was previously\nand so the idea here is to sort of\napproximate something like the limit of\nwhat the human would think after they\nhad seen all possible arguments\nso in the debate game we were finding\nsomething a sort of equilibrium that was\nlike what would the humans say after\nthey saw the two best Arguments for a\nparticular you know thing right the best\nargument in favor you know the best\nargument against in this case we're\ndoing something a little bit different\nwe're saying what would a human saying\nafter the human had seen all possible\narguments because each individual point\nin time the adversary is trained to\nproduce the information which when given\nto the human at the end of the\ntranscript would most move the market\nright would most cause the humor to say\nsomething different than what the market\nis currently predicting the human would\nsay and so in the limit if you have an\nunexploitable Market if you can get to\nthe point where the market knows exactly\nwhat the human you know is making a\nprediction such that there's no\ninformation that you could provide to\nthe human that would change that\nprediction then that market is in some\nsense approximating the human's judgment\nafter seeing all possible information\nokay that's the sort of idea here and so\nit's very similar to the debate game in\nthat we have a situation\nwhere there is a sort of debate\ntranscript right there's a bunch of\ninformation that is fed to a human and a\nhuman makes evaluation but rather than a\nsort of two-sided debate we have in some\nsense a sort of infinite side of debate\nwhere it is just a market that is\npredicting what human will say and a\nsort of Trader in the market that is\ntrying to provide whatever information\nwill move it to the greatest extent\npossible okay this is a little bit\ntricky I think in many ways this is sort\nof a maybe Advanced version of of debate\nthat sort of fixes some problems so\nstill has many of the same issues\num but is is nicer in some respects it's\nvery related to debate does this sort of\nmake sense yeah questions\nso just to be clear the\nat the end you do get to see not just\nwhat the market said what the adversary\nsaid as well right that's right so the\nthing that the human Sees at the end is\neverything that the adversary said in\nthat particular transcript right so the\nmarket starts out with some prediction\nthe market is like based on my knowledge\nof you know how these sorts of you know\nthings usually go on what the sorts of\nthings that a human might see here's\nwhat I think the you know the humans you\nknow final judgment will be and then the\nadversary tries to you know look at that\nmarket and see if there's anything the\nmarket hasn't taken into account any\nindividual piece of information it could\ngive to the human they would change what\nthe human says relative to the market\nand then provides that information and\nat the end the human gets to look at all\nof the provided information and produce\nsome judgment and then you know we sort\nof iterate by training the market on\nthat new provided information\nYeah question\nso let's try and go through an example\nof the difference between this and\npivate let's look at this we bet this is\na hard example when you the human is the\nPope in the 17th century and this is the\ncopernican debate and originally he's\npretty biased in favor of the Dilemma in\nbu and in debate\nhe hears equal amount of arguments from\nSimply to you and Galileo and then\ndecides and here originally the market\nis 99 percent that the pope will say\nthat follow my but then the adversary is\nmotivated for a long time to produce\narguments on The copernican View because\nit's easier to move to the 99 percent\nDirection than the wall so that it\nconverges so then he hears a lot of\nGalilean arguments gets close to 50\npercent then the adversary starts to\nmove him back and then what happens\nuh yes I think that the way you're sort\nof analyzing this is based is sort of\ncorrect where like you know if the\nmarket starts very convinced the human\nwould say one thing then you know\nthere's a lot of opportunity if there\nare sort of good arguments in the other\ndirection for the adversary to try to\nyou know change that and push it back\nand so if you start you know with this\nyou know uh you know you know incorrect\nview of the solar system then maybe\nyou'll start providing some evidence in\nthe other direction until you know it\npushes you know if those arguments are\nin fact convincing they will start you\nknow pushing the the model in you know\nthe market and the human you know in the\nother direction If eventually the human\nyou know or and the market which is you\nknow a proxy human here gets you know\ncloser to 50 and maybe yes it'll be you\nknow it's very unclear you know then you\nknow which arguments are more convincing\nThe Hope like debate would be that the\nyou know the convincing arguments you\nknow or at least the ones that are\nconvincing after you've seen all of\ntheir responses are the true arguments\nright that you know\num yes maybe then the adversary will\nflip and we'll say some false thing\nabout the solar system but then it'll be\nreally easy for the next adversary to be\nlike okay now that you've you know\npushed back in this direction you know I\ncan just you know refute this because\nhere this you know a bunch of you a\nbunch of information and in the limit\nthere should be no reason for the\nadversary to ever say things that will\nresult in uh that will be easily refuted\nbecause if the adversary says something\nit moves and then the you know the next\nadversary can easily reply with no this\ndoesn't make any sense because of this\nthen the market shouldn't even move on\nthat information because the market you\nknow should you know if the Market's\nsort of understanding what's going to\nhappen at the end of this procedure\nshould see that the adversary has made\nsome particular claim that is easily\nrefutable it knows the next adversary is\ngoing to refute that claim and so the\nmarket should say well this this\ninformation is going to move the human\nat all because it's just going to be\neasily refuted that's the idea and so\nthere's sort of the limit of this\nprocedure right we think about what an\nunexploitable Market would look like\nright you know limiting in the same way\nthat we've sort of thought about hch the\nlimit of an unexploitable Market is a\nsituation where there's no information\nthat you could provide that would shift\nthe human's beliefs relative to that\ndistribution that if a human you know\nbelieves in that in the distribution of\nthe market is predicting no additional\ninformation that any adversary could\nproduce would shift those beliefs\nand so again that's very similar to the\ndebate equilibrium where it is you know\nthe sort of arguments that would be sort\nof most persuasive to the human you know\nregardless of what the most you know\nconvincing counter argument that they\nsaw but it's a little bit more General\nrather than a single counter argument\nwe're looking at sort of all possible\nyou know piece of information that could\nbe provided to the human\nokay yeah questions uh so I'm confused\nabout what the market looks like in this\ncase if I usually I think of a market as\nmade up of pigeons making Trace like\nwhat would citations be here and what\nare they treated yeah so Market is a bit\nof a loose analogy\num so I think that the way you can sort\nof think about this is that\num\nuh in some sense the thing that is\nhappening here is that the the the way\nthe market analogy goes is that the\nadversary is sort of structured in such\na way where they are incentivized to\nproduce the information that would most\nallow them to sort of make money in the\nmarkets um and so if you treat this\nmodel distribution as a market as a sort\nof like uh you know if it were a market\nit's not a market in this case it's just\na single individual model predicting\nwhat the human will say but if it were a\nmarket and you were to imagine it\nanalogize it to a market then the things\nthat the adversary are incentivized to\nsay are exactly the things that would\ntheoretically make the adversary the\nmost money by saying those things in the\nmarket and so that's the sort of analogy\nhere\num you can sort of think about this in\nsome senses like Market manipulation\nwhere it's like what are the things that\nthe adversary could say the sort of you\nknow words that would be useful to\ninterject into the world such that it\nwould you know allow them to make the\nmost money trading you know Insider\ntraining on the market that's the sort\nof thing that's happening here where the\nadversary is able to produce you know is\nproducing information which most which\ncreates the largest market shifts that\nthey can they can anticipate and then\nprofit on of course it's not actually a\nmarket and it's not actually a Trader\nbut I think the analogy can sometimes be\nuseful for understanding What's\nHappening Here\nyeah past the mic\nhey I'm\nuh confused about like the training\nprocedure here in more detail like when\ndoes the adversaries like we're like\ntraining this Market on like some actual\nretro human outputs it Facility have to\nhappen after like a finite adversary\nsuggested it's\num even if you know maybe like the\nbillion suggestion would move the human\na little bit so before we have to like\nstop or something and I'm confused like\nwhat\nhow long do we run this Loop for when\ndoes the adversary stop how many\nsuggestions does the like Market did to\npredict the adversary recommends yeah so\nyes it's a really good question I think\nit's quite tricky so the thing that\nyou're sort of hoping here is that well\nyou reuse the the market you know over\ntime right and so as the market learns\nthe sorts of things that the adversary\ncould say that would result in Easy\nresponses like that they would easily be\nrefuted and the sorts of things the\nadversary could say that would result in\nthe human actually believing it the\nmarket will get better and better about\npredicting you know okay if the human\nwere actually able to see a bunch of\ngood information this is the thing that\na human would believe\num and so the market sort of could\nshould converge in some sense like I was\nsaying previously to with something you\nknow something unexploitable something\nwhere there's no information the\nadversary could provide they would you\nknow shift the Market's prediction and\nan unexploitable Market should have the\nproperty that right what it means for\nthe market to be unexploitable in this\ncase what it would mean for a dispution\nover what the human says at the end to\nhave no information no nothing which the\nadversary could do which would move the\nmarket what that means it means that it\nis a distribution over what the human's\nbeliefs are such that no additional\ninformation would change this place\nright and so in some sense we're sort of\nhoping that the equilibrium here the\nthing that the market should in some\nsense converge to if it's sort of the\ntraining is doing the thing that we want\nit should converge to something which is\nan approximation of the the sort of you\nknow that thing I've been saying right\nthe like equilibrium of the human's\nbeliefs after seeing all things now it\nis a little bit tricky because of the\nsource of you know path dependent\neffects like you're saying where well\nit's very unclear you know what is\nhappening over individual you know runs\nright like in some sense well the market\nis sort of getting a little bit closer\nto what the human would say each you\nknow human really think after seeing any\nindividual run with the adversary says\nsome information but at each individual\npoint in time the market is sort of only\nexpecting the adversary to say finite\nnumbers of things\num and so that can be really tricky\nbecause well maybe the adversary said\nthat maybe the market believes that\nthere is some theoretical distribution\nthey would be unexploitable but is never\nachievable by any finite number of you\nknow things that the adversary could\npossibly say\num and then you know maybe you won't\nconverge to that\nI'm not going to go too much into diesel\non how you might solve this I think that\nproblem is is solvable and I discussed\nit in more detail in the like actual\nthing on this\num the thing I'll say very briefly is\nthat the way you solve that problem is\nyou give the adversary the ability to\nexhibit things that the market says on\nother inputs as one of the things the\nmarket one of the things the adversary\ncan provide and that allows you to\nsimulate infinite depth\num without actually going to infinite\ndepth but it's but I don't want to go\ninto too much detail on that but suffice\nto say if you are only interested in the\nlimiting Behavior I do think you can\nsolve that problem but of course the\nlimiting Behavior like we've stressed is\nnot necessarily you know indicative of\nwhat it would actually do in practice\nokay yeah question\nthanks\nperhaps more basic question on what sort\nof questions do we expect debate to be\nuseful for\num I can imagine a case where we would\nwanted to like the model to debate some\nsort of scientific claim uh which uh if\nyou were to come up with good arguments\nyou would need experimental evidence for\nwhich you can't get because you're just\nthe model that maybe doesn't have a\naccess to new to physical reality act to\nrun these experiments so\num yeah the the stronger model that just\nor one model might win the argument that\nhas stronger arguments just because the\nother one that is actually right might\nnot have the experimental evidence or\nsomething that does it so yeah I\nconsider it there it might be a class of\nquestions that is just not suitable for\nus and I wonder whether you're flawed\nabout which sort of questions are\nsuitable for this and which one's armed\nyeah great question so\num so one thing I'll say is that you\nknow I've sort of been mentioning you\nknow a lot of these different approaches\nsort of applicable different situations\nI totally agree that there's going to be\nsituations where like a lot of the\napproaches we've been talking about you\nknow today and even previously was sort\nof predicated on this sort of question\nanswering setup that the idea is well\nyou know the thing we sort of most\nreally want out of our apis you know for\nto be able to take you know individual\nquestions and answer them truthfully and\neffectively now in some sense you can\nsort of take almost any problem and sort\nof you know phrase it as a question\nanswering problem you know even you know\neven a problem of you know trying to you\nknow directly act and accomplish some\ngoal in the world can be you know\nphrases what is you know a helpful way\nyou know useful you know thing for me to\ndo to accomplish this this task\num but it is totally the case that you\nknow well it's not clear that for a lot\nof the sorts of things that you might\nwant to train in AI systems do this is\nsort of the right frame or a useful\nframe so you know especially in\nsituations where you know you need to do\na bunch of you know in you know direct\nsort of you know back and forth with the\nworld like if you're doing experiments\nor if you're doing something\num you know maybe you're just like\nrunning a factory right you know it can\nbe really difficult you know potentially\nto sort of get a model to do something\nlike this now I think that that is\nunclear I think there totally are\ntechniques where you could take almost\nany of these approaches and try to apply\nthem in something like this you know you\ncan do you know settings where the you\nknow it gets to like you know you you\nfirst start by asking you know what are\nsome ways to think about this what are\nsome experiments to run then maybe you\nin fact go run the experiments you take\nthe results of the experiments back and\nyou like give them to the model now\nyou're like okay here's some information\nthat you now have like you could imagine\nlike in the market making setup maybe\nthe adversary has the ability to say you\nknow look at this experiment it'll be\nreally useful and then the human can you\nknow go you know run you know say run\nthis experiment you know take the result\nback and it gets integrated into the\nthing right so you could imagine you\nknow modifying a lot of these sorts of\nthings to you know add in that sort of\nthing but\num it is quite tricky least um you know\nit is totally plausible that for many of\nthese the sorts of things you might want\nyour system to do and that's sort of in\nthose sorts of cases that this you know\nwon't won't sort of necessarily work\nwork in that in that situation\nokay\num great and so just like adding you\nknow one other thing you know again in\nthe same way as debate we can also add\nyou know oversight we can say that uh\nyou know\num I'm actually I'm not gonna go into\ntoo much detail on what oversight looks\nlike here it's basically a sort of\nversion of the same way that that you\ncan do oversight uh and\ncross-examination and stuff into me\num but suffice to say it's very similar\nyou know you can have situations where\nthe models are trying to sort of inspect\neach other and provide that produce that\nas evidence\nokay and then I'm going to go only\nrelatively briefly through this because\nit's very similar to the debate setup\nbut basically you know again the goal\nhere is we sort of want the model to be\npredicting what the human would think\nafter seeing all possible arguments\nand we're sort of relying on it being\nthe case that um you know there's no\nsort of uh you know that all of the sort\nof uh you know untrue persuasive things\nthat you could be said there is some way\nthat you know some you know other in\npieces of information the adversary can\nproduce that would sort of you know\nexplain to the human why that thing is\nuntrue and would give you know the human\nyou know actual true beliefs and so uh\nwe're sort of again relying on this very\nsimilar version of the you know in the\ndebate claim uh in debate case we're\nreally relying on being the case that\nyou know the sort of most persuasive\nthings really end up being you know the\nmost true\num and then again we're sort of relying\non this sort of uh you know oversight to\nsort of help us here there's maybe some\nreasons that you might expect that\nyou're less likely to get something like\ndeception in this case um one thing\nthat's nice compared to debate in this\ncase is that the adversary uh as opposed\nto the debater he's sort of not trying\nto you know accomplish some goal over\ntime steps so the debater in the debate\ngame is sort of trying to get you know\nit's trained to get you know reward in\nthe debate game over many individual\ndebate steps and here that's not the\ncase the adversary is just trying to do\neach individual piece of information\nproducing uh however I that's not a hard\nguarantee at all it totally could be the\ncase that you still end up with a model\nthat actually has a long-term objective\num you know and is deceptive in any sort\nof way um despite the fact that you're\nyou know you're only training on an\nindividual one-step thing yeah question\nso using the copernican example from\nbefore let's say the header I'm the pope\nand the adversary gives me the\ninformation of if you believe the uh\nheliotropic Theory then\nyou are probably going to be kicked out\nof Pokeman they're probably going to\nburn you with the stake you should\nprobably not listen to anything else\nthat any of us have to say in case you\nactually wind up believing that and then\nare they able to lie to people\nconvincingly let's say that is entirely\ntrue\nwould that model still be aligned in\nthat case by telling me this fact\nuh I mean I think it's very unclear it\nsort of depends on what you want it\nseems like probably we don't want that\nright like that there's something that\nthe model can say to the human that like\nyou know causes the human to you know be\n100 set in their beliefs and those\nbeliefs are false and they can never be\nconvinced otherwise it seems like that's\nprobably not what you not what you sort\nof want to have happen in the situation\num maybe it is you know maybe like\nyou're saying there's a situation where\nlike you know actually you know\naccording to the Pope's values this is\nreally the thing that the pope should do\nbut um\nI don't know I mean it depends on sort\nof what you're going for I think that\nprobably we would we would want to not\ndo that\nokay uh okay so one last proposal to\ncover that uh is uh this imitative\ngeneralization idea so uh this is also\nsometimes talked about as learning the\nprior\nso what is this so um\nwe're again we have this setup where we\nhave a human and the human answers\nquestions\num and similarly to the amplification\ncase the human now has access to\nsomething to help them answer those\nquestions in this case that thing is\nsome sort of model uh Slash information\nit's some sort of very large database a\nbunch of information that helps the\nmodel maybe a large Cloud that helps the\nhuman answer the question maybe a large\ncollection of models maybe just an\nindividual model it's some you know\nthing which is extremely useful to the\nhuman in answering the question\num\nand then uh we sort of we want to\nproduce this thing we want to have\nsomething which is really helpful for\nhumans that effectively lets them answer\nyour whatever questions we want as\neffectively as possible and the way that\nwe're going to train that thing is well\nwe want it to be the case that whatever\nthis thing is this sort of information\nthat we provide to the human that that\ninformation when given to the human in\nfact results of the human making good\nanswers to questions so we can you know\nin fact ask questions you know see\nwhether those answers are correct so you\nknow and trained to have the you know\nhave this information be the case that\nwhen fed to the human in fact results in\ncorrect things and also we want it to be\nthe case that whatever this sort of set\nof information is that is presumably\ngoing to be represented in some model\nthat that information is plausible a\npriori according to the human\num and those are the sort of two things\nwe're trying to train this thing to do\nso you can think of this thing as sort\nof a set of information but effectively\nwe can think of it as just a model and\nwe're training that model on two things\nwe want the sort of information\nrepresented in that model the sorts of\nthings that it says to all sort of be\nplausible according to a human and to In\nfact when the human has access to that\nset of information in fact result in you\nknow correct answers on you know all the\nthings that we can check\nokay and the reason we might like this\nthe sort of fear the theoretical sort of\ngrounding behind this sort of a thing is\nthat we're sort of approximating\nsomething like a uh prior and an update\non that prior so we're saying well uh we\nhave some prior plausibility on\ninformation that is like you know How\nlikely is this operatory and then we\nhave some likelihood that is like well\nupdating that prior plausibility on the\nuh you know how to what extent that\ninformation in fact does a good job\nthose hypothesis actually does a good\njob at predicting the real things that\nwe've seen in the world we want to\nupdate the things that do a good job of\npredicting the world and down weight the\nthings that do a bad job and so we're\nsort of trying to mimic that updating\nprocedure you know what if a human had\nthe ability to actually just update on\nall possible information uh you know and\nand come to some conclusion well we can\ntry to mimic that sort of a thing that\nsort of human uh you know prior by\nsaying well what is the most what is the\nthing that would be most plausible\naccording to human and that would result\nin the best answers you know the prior\nand the likelihood\nlots of questions\nso I like this this solved several\nproblems with debate like for instance\nthe example I just did before with the\nburning of the stake thing because in\nthis case the we want the the human\nwe're going to be training the model on\nnot just the human updating the human\nsaying where it's true\nbut in that case how do we determine\nwhat is true in the first place where\ndoes the accuracy loss come from\nyes I think this is extremely good\nquestion uh and it's very tricky so I\nthink it has to sort of come from\nwhatever information you have about the\nworld right so any individual situation\nwhere you can make some prediction about\nsomething in the world where you can\ngather some information uh uh where you\nknow you can make some concrete\nprediction then you can use that as\ninformation to update your hypothesis\nright we're trying to get at something\nlike what would the humans beliefs be if\nthey had the ability to update on all\nthe information available in the world\nanything they could ever observe and so\nwe're trying to say is we're like well\nyou know there are such we can you know\njust gather a data set of you know just\nmaking predictions about the world\nsituations where you can say well here's\nsomething that happened and then\nsomething can happen next\num you know and if you can successfully\nmake all those predictions you know if\nhypothesis expressly explains all those\npredictions\num you know then it should have a really\nlarge update in favor of that hypothesis\nand so that's sort of the sort of thing\nright we're just saying well anything\nabout the world that we can collect any\ndata anything that we can predict about\nthe world all of the information that we\nhave access to you know those are all\nthe things that we want to be updating\non\nbut I still don't get where the accuracy\nloss comes from like let's say the\nquestion is is it day outside\nit does the model somehow know if they\nstay outside or is the truth coming from\nwhat the human comes says is true at the\nend of this process\num so so we just come from something\nthat we've collected so maybe we've in\nfact collected a bunch of examples of\nsituations in the past where it has been\nday or hasn't been night based on you\nknow some information and then in that\nsituation we would say you know can you\nin fact predict all these situations\nsuccessfully\num you can even you do this in a sort of\nunsupervised way you just gather\narbitrary information about the world\nand then train to predict some set of\nthat information from some other side of\nthat information because we're just\ntrying to basically approximate you know\ndo does the hypothesis make good\npredictions about the world and so any\ninformation that we know about the world\nwe want our hypothesis you know to in\nfact be making good predictions about it\nbut if we reliably have these facts\nabout the world why do we need this\nwhole thing why can we just use the\nfacts well because we want to get new\nfacts about things in the future right\nso situations where we don't necessarily\nhave the facts yet we want to get a\nprediction about it right so you know\nfor example we might know you know in\nfact what happened you know in 2023 but\npredicting what happens in 2023 given\nwhat happened in 2022 is extremely\ndifficult and something that would be\nvery valuable and so we can try to you\nknow you know get a thing which is\nmaking those sorts of predictions by you\nknow finding the hypothesis that best\nexplains what actually happened in 2023\nand is you know most likely according to\nthe human prior\nbut so maybe I say they will not be an\nH1N1 pandemic in 2023. what do we judge\nthe accuracy of that statement on based\non the loss yeah so you can't judge the\naccuracy of the like new statement that\nwe have like no previous information you\nknow to guide it right we're saying with\nthe thing that we're hoping for is that\nthis procedure results in a model which\nin fact makes good predictions about\nthings because it finds the set of\ninformation that results in the best\npredictions in the past and is the most\nplausible right so you know in in all of\nthese cases you know we have no ability\nnecessarily to you know the thing we're\ntrying to do is get a model which is\nable to produce good effective answers\non some new data that we haven't seen\nbefore and the way that we're trying to\ndo that here is we're trying to say well\nwhat is the sort of model that you know\nwould if we're thinking about it as like\na hypothesis that would be you know that\nwould best explain the data we've seen\nso far and would be most plausible\naccording to a human that's the\nhypothesis that we should be using to\nlook at Future data\nand you know best make predictions about\nthat featured app\nokay\num and so the idea here is that um when\nwe have this procedure right we have you\nknow human has access to some model set\nof information that you know helps them\nanswer questions and then we can just\ntrain some model to you know imitate\nthis this whole procedure right to you\nknow be able to effectively you know\nimitate exactly what the human would do\ngiven access to this this you know most\nplausible information you know uh that\nhas this property that is you know the\nthe greatest prior in likelihood\num and then this you know model we can\nuse to you know as our sort of question\nanswering system\nokay so this is a little bit of a weird\napproach in some sense in some sense\nit's very simple we're just saying you\nknow we want the the thing to be\nplausible and you know we wanted to when\ngiven to a human result in good you know\noutput and then we want to train a model\nto approximate the sole procedure\num but it's also a little bit weird um\nyou know the reason we might hope that\nit's working is that it's doing you know\nsomething like approximating you know\nBeijing inference\num but of course it's very unclear\nwhether it's actually doing that right\nbecause in fact the thing that we've\ndone is just said well we want some\nmodel right uh you know Z is just some\nmodel you know some algorithm which in\nfact results in good performance uh you\nknow on this data set of you know\npredictions and also you know seems\nplausible according to human and then\nwe're like okay and then uh you know\nthat thing when fed to a human you know\nwe just want to approximate it\num\nwe have no necessary guarantee that it's\nactually going to be you know the\nhypothesis that would be the sort of\nthing that would you know be this be\nwhat the human would\num\nyou know actually think if they and you\nknow considered all the positive\ninformation and selected the best\npossible hypothesis but maybe it is\nsomething like an approximation of that\nokay and again we can also sort of have\nsome oversight here but I'm not going to\ngo into too much detail on what it would\nlook like but it's very similar to what\nwe've talked about previously in stuff\nlike imitative amplification\num\nokay and so uh the goal here right is\nwe're trying to produce a model uh that\nis sort of mimicking what the human\nwould you know what hypotheses the human\nwould have after they've been able to\nupdate on all possible information that\nthey could see about the world\num\nI'm I'm not going to talk too much about\nthe properties of this I think it's a\nlittle bit weird and tricky but um you\nknow very briefly\num\nthere are you know some sort of weird\nouter alignment issues here\num it can be very hard to incentivize Z\nto really contain all the correct\ninformation\num especially because there can be\ninconsistencies across individual\nquestions uh or like there can be double\nupdating across individual questions\nthere's a bunch of very sort of tricky\nissues about getting into the right\nthing\num even in the case where you really\nbelieve that it is actually the thing\nwhich has the property that it is the\nyou know most plausible according to the\nhuman and results in the best outputs\num because of the way that you're doing\nthis procedure where each sort of output\nis evaluated independently things can\nstill get a little bit tricky about\nwhether it is actually equivalent to the\nsort of correct Bayesian update and then\nthere's this sort of inner alignment\nissues here as well where you know we we\ndon't necessarily even you know there's\nno reason to believe that you know Z\nwould actually approximate anything like\nthe real hypothesis that we would want\nhere in some sense the sort of only\ndifference between something like this\nand just sort of directly training a\nmodel to produce good answers and sort\nof seem good to a human which would be\nlike the rhf case is we're sort of\nadding a human we're saying well you\nhave to produce good answers such that\nwhen a human has access to you it\nproduces good answers and also seems\nplausible to a human\num but it's unclear how much that change\nactually results in you know helping us\nhelping us find like a better Basin\num it's absolutely still possible you\nknow that we could get a deceptive model\nin this case\num and so it is a little bit unclear\num\nI don't talk too much about the\ncompetitiveness here either it's very\nsimilar\num to um\na lot of the approaches we've talked\nabout previously\num The Hope sort of would be that you\nknow if you can get something like this\napproximation of a you know an actual\nsort of update of the human then you can\nsort of approximate something like you\nknow the best possible judgment uh you\nknow of the human but you're still in\nsome sense limited by what that best\npossible Judgment of the human would\nlook like you know in some ways very\nsimilar to each stage where you're sort\nof limited by what the you know best\npossible thing would be that humans\nwould be able to do given you know\nability to consult with all these other\nhumans\nokay\num so those are all of the ones I want\nto talk about right now there's some\nother approaches that I'm not going to\ntalk about but that are also maybe\nrelevant\num recursive reward modeling is is one\napproach I think that the way that we\nhave talked about approval-based\namplification in this talk though is is\nvery very similar and essentially\nencompasses recursive award modeling so\nwe've effectively dealt with that\napproach there are others that we sort\nof haven't talked about um stem AI is\none\num where the idea would be to sort of\nunderstand you know to just sort of use\nyour models on uh you know individual\nnarrow like mathematical or scientific\ntasks and not try to do any human you\nknow prediction or question answering in\ngeneral at all\num there's other sort of approaches like\nnarrow award modeling we really just\nwant to focus on using models for\nindividual narrow tasks and I'm not\ngoing to go through all of the you know\nother possible approaches but\num hopefully the idea at least right now\nis a sort of give an overview at least\nof the specific you know some of the\nleading sort of approaches and how\npeople are thinking about trying to move\ninto the regime of you know evaluating\nmodels in super human settings right\nwhere a lot of the approach we've talked\nabout previously you know up you know\nprior to today have been you know things\nthat have been really focused on you\nknow more like current models and trying\nto bridge the gap from current models to\nuh you know these sorts of you know\nthings are starting to get to AGI but we\nalso sort of have to also deal with\nthings that are Bridging the Gap from\nAGI and Beyond and so a lot of the\napproach we talked about today are\nstarting to maybe address that thing you\nknow giving us this ability to scale our\nability to oversee our models and\nprovide good feedback you know beyond\nthe point where we can literally just\nthe things that humans can evaluate\ndirectly\nbut they're very tricky right all of\nthese approaches have you know a bunch\nof really tricky you know issues with\nthem and things that really have to be\nyou have to be able to get right to make\nthem work\num\nand so it's very unclear\num\nI think one final thing that I will\nleave with before we do questions and I\ndon't necessarily want uh you don't have\nto give your answer to this right now or\neven ever but I think that a good\nexercise you know maybe a take-home\nexercise in terms of sort of thinking\nabout a lot of the things that I we've\nsort of talked about across all of this\nis just sort of you know at some point\nin time I think that we like as a\nsociety are going to have to make\ndecisions right about what sorts of\nthings we actually you know want to go\nthrough with what are the sorts of\nproposals we actually want to do and\nthese are really really hard and\ndifficult decisions\num and I think in many cases a lot of\nthese decisions will have to be have to\nbe made under a lot of uncertainty right\nnow we have a ton of uncertainty right\nwe've gone through all these approaches\nand our conclusion has been for\nbasically all of them we don't know you\nknow here are some things that might\nwork here are some things that might not\nwork it's very unclear whether they work\num but in many cases it's not clear\nwhether that uncertainty will be\nresolved and so in a lot of cases we do\nhave to sort of end up making decisions\nthat are the best that we possibly can\nunder uncertainty and so how do we\nactually do that you know what decisions\nwould we actually make that would be the\nbest possible decisions under\nuncertainty is the thing that we're\ngoing to really have to Grapple with and\nso I think starting to Grapple with that\nquestion yourself and thinking you know\nwhat would we do given the uncertainty\nthat we currently have is a really\nuseful thing to sort of start\num dealing with and sort of\nunderstanding well what are the\nproposals you know that we would start\nyou know what are the things and there's\nmultiple criteria here it's not\nnecessarily just what is the best\napproach right that is most likely to\nsucceed it's also what is the thing that\nif it fails would be the least\ncatastrophic\nokay and that uh with that I'll we'll\nsort of end here and open it up for for\nfinal uh questions\n[Applause]\nanything else\nwell what was your uh recommended\nproposal be token AI approach we can\nDefine it oh that's that's a good one\num\nvery tricky I think\npersonally currently I think that\num\nyou know we are we are in regime right\nnow where I think it makes more sense to\ndo things like the predictive model\nstyle approach where you know rather\nthan trying to really aggressively you\nknow scale these models and train on\nlike approval signals that we might not\ntrust you know we can try to do you know\nprediction cases where we can trust them\nbut like I said previously I think that\nwill stop working uh you know and so\nit's not a scalable approach but I do\nthink that like if I were you know to\nsay well what would we do right now I\nthink that's the sort of thing that you\nyou'd start you want to start with\num but uh that's sort of a cop-out\nbecause it's not sort of addressing this\nanswer of well you know if we really\nwant to just sort of as we sort of you\nknow start scaling more what are the\nthings that we really need to do to be\nable to you know align these models and\nget them do the right thing even into\nthe you know highly superhuman regime\nand then it starts to break down even\nmore you know and I don't know if I have\na I don't I don't have a really good\nanswer I think that there's some things\nthat we can sort of analyze as like you\nknow convergently useful so in a lot of\nthese approaches we talked about today\nyou know stuff like having good\noversight tools having good transparency\nis extremely important so we can at\nleast prioritize you know particular\nresearch directions that are likely to\nhelp with those sorts of things\num\nand you know I do have like you know\nsome preferences you know some of the\nsorts of proposals here that I like\nbetter some that are like worse\num\nI I tend to be you know in favor of\nthings like imitative amplification\num Market making is one that I I came up\nwith and so I you know have some amount\nof attachment to it so I think it has a\nlot of issues\num similar to debate\num but I I don't I certainly don't have\nan answer\nI also think there's a lot to be said\nfor microscope AI if it's possible but\num we we'd have to actually succeed on\nbeing able to do a lot of very\nsuccessful transparency you know to be\nable to do it and I think that that is\nat least currently you know not\nsomething we're really succeeding on\nthough you know like I mentioned you\nknow it seems something that seems like\nfor a lot of these approaches extremely\nconversionally useful and so if we you\nknow we're able to succeed on that more\neffectively then it would unlock a lot\nof possibilities\nYeah question yeah\nwell so a lot of these approaches rely\non or they started where we fail to\ntrust our\num feedback signals\num and for something like reward or like\nsomething like rlhf the most common\nthing is to provide binary feedback\nwhich is a read like inefficient use of\nhumans to provide feedback I mean if I\nwere to give feedback on this talk I\nwouldn't say thumbs up thumbs down that\nwould probably just like get into the\nweeds of like what I liked or what it is\nlike what I disagree with where I was\nconfused and that can be done by means\nof needling language or some other adult\nform of communications\num\nhow\nmight we\num break down I guess my question is has\nsomebody looked into how we can provide\nbetter feedback and is that an Avenue\nthat is fruitful in your opinion\nyeah so yeah good question\num in terms of providing non-binary\nfeedback this is absolutely a thing that\nyou can and has been done with current\nmodels so Ethan Perez has a has a paper\non this looking into how you can provide\nnatural language feedback\num that um they can be quite effective\nin a similar you know way too binary\nfeedback in our life so I don't think\nit's the case that like we only do\nbinary feedback currently\num there are absolutely ways that you\ncan do you know more um you know\ndetailed feedback than that so it's\nunclear whether you know I think in some\nways you should sort of think about that\nas it's not clearly making the feedback\nbetter it's just making it more\nefficient right you could have gotten\nall of that information by doing binary\nfeedback but the binary feedback is very\ninefficient because you have to have a\nlot of examples of slight tweaks you\nknow to get all of the information of\nthe minor feedback you know out of the\nbinary feedback and you can just sort of\nget a lot more information out of the\num the language you act but it's not\nclear that that's actually making the\nfeedback better right like in situations\nwhere the human is in fact just confused\nand like is saying the wrong you know it\nhas incorrect beliefs about whether the\nthing is good or not then binary or\nlanguage feedback or you know getting\nmore detailed feedback from the human\ndoesn't help because that feedback is\nyou know incorrect and so it doesn't\nnecessarily make the feedback better\nthough it does make it more efficient\nwhich can help you you know maybe get\nmore feedback but again you know getting\nmore feedback is only so helpful as long\nas that feedback is good right and so\nthe sort of key problem you know is not\nnecessarily just the the you know\nquantity of feedback but the quality\nfeedback you know the ability to\nactually believe that that feedback\nisn't is in fact correct the human is\nactually understanding what's happening\nto provide you know good feedback there\nbut is I mean in that particular\napproach I worked on this during the\nsummer as well would if from the end of\nthe generally come yeah\nin that particular approach\nthe for the what model what what we did\nwas basically like take the language uh\nor like we had a model who prompted it\nto the greater summary and then get some\nfeedback same model and ask it to\nrewrite the summary\nreward multi to Vault\num and so in that sense it hasn't yet\nbeen used for RL HF and to like have\nmore\num of the sort of\nfeedback that is richer\num I agree with you that to some degree\nit's just making the process more\nefficient as in like instead of giving a\nton of thumbs up and down they can just\nlike provide one with another sentence\nthat has more along the information\ncontent I disagree however with the\npoint where\num you said that\nwe can't clearly communicate the\nconfusion like I guess in natural\nlanguage I can say well I'm not quite\nsure how to give you feedback on this\nbecause I'm confused whereas with thumbs\nup from sound you can't do that even\nwithin the linen where you can just like\nuh you know\nthe internet feedback on the my new\ndetails of like how after the behavior\nof the agent is to be read\nI think so first I I generally will\nthink of\num a lot of those sorts of processes\nyou're you're training on feedback of\nsome variety and then getting the model\nto sort of score well on that feedback\nas relatively continuous and so I don't\ndo that much differentiation usually\nbetween like was there a preference\nmodel or was there not a preference\nmodel I think you can differentiate\nbetween those things and so sometimes it\ncan make sense and the details can\nmatter though I think that oftentimes\nthey're not they're not that important\nin terms of just like the overall\nalignment properties but but sometimes\nthey can matter but that's why I refer\nto it as as as RL HF\num in terms of the sort of concrete\nquestion of you know communicating\nconfusion I totally agree and I think\nthere can be cases where the human is in\nfact confused and you're able to sort of\nyou know address that problem being able\nto communicate the confusion I think\nthat the issue remains however that\nthere are situations where you know for\nexample the human doesn't know they're\nconfused right that the human thinks\nthey are giving correct feedback they\nthink they understand what's happening\nbut in fact the human is incorrect human\ndoesn't understand what's happening\num\nand in that situation you know there's\nyou know we sort of need something else\nother than the human to you know help\nthe human or somehow produce you know\ngive the human more information to give\na more informed response because if we\nare only limited by the human's ability\nto understand and evaluate then we are\nyou know fundamentally bottlenecked by\nthings that humans can effectively\nevaluate and they're going to be\nsituations where\neven when you know also where humans\ncan't always know whether they're\nevaluating effectively where you know if\nwe're you know yes we can try to limit\nit into cases where you know the humans\nbelieve they're evaluating things\neffectively and you know have some you\nknow positive valuation but they're\ngoing to be cases where you know that\nthat is also not sufficient where the\nhumans believe they're evaluating\neffectively but in fact you know have\nsome you know limitation you know they\ndon't actually understand what's\nhappening effectively\nand so we still sort of have to in some\nsense go beyond so I think that like you\nknow a lot of these approaches are still\nsort of trying to address that problem\nright of you know how do we go beyond\nthat the feedback that a human is is is\nis ever able to provide you know\nsituations where the human is just\nconfused you know or you know doesn't\nknow that they're confused you know the\nhuman has some incorrect belief but in\nfact is just\num you know\nthinks they're things that they\nunderstand what's happening and that can\nhappen especially when you're training a\nmodel to say things that look good to a\nhuman right if you're training the model\nto produce you know rocket ship designs\nthat look really good to the human then\nyou're going to be in a situation where\nmany of those situations you know it's\ngoing to look good and the human's going\nto think it's great but in fact you know\nit's it's not gonna actually work in\npractice because it was only optimized\nto look good according to the human and\nso in some sense you know you need some\nbetter evaluation signal to sort of be\nable to address that\nI'm not sure whether this is Japanese or\nsomething else but do you have any\nresearch advice or people start envelope\ninto these theoretical questions\nyeah that's a good question\num\nI'm not going to try to like right now\nrecommend any like particular you know\nplaces or things to do because you know\nthe field is constantly changing I think\nthat\num in terms of just like you know in\ngeneral you know\njust you know listening to and\nunderstanding all these sorts of things\nI think is extremely important just like\nhaving the basic concepts understanding\nsorts of things that people are talking\nabout in the field and the way in which\nthe sort of the basic structure of the\nproblem is I think basically always\nvaluable and essentially any you know\nposition that one might be in\num\nI think that you know one of the sort of\nissues you know ways that the sort of\nfield currently is operating is that\num we just we don't know what to do\nright we're in a situation where we have\na lot of ideas we have things that might\nwork we have some reasons why they might\nwork or might not work but we don't know\nwhat to do right there isn't some you\nknow this is the thing that you know we\nneed to accomplish this is the you know\neveryone's on board with you know this\nis the approach right and so in that\nsituation I think it's very important to\nyou know have a good General\nunderstanding of like okay here are the\nsorts of things that are being discussed\nhere are sort of how to understand and\nsort of the basic concepts because it's\nvery unclear you know what the actual\ncorrect thing is to be doing and so\nhaving an open mind and being able to\ntry to figure out what the correct thing\nis to be doing is is really important\nanother sort of piece of advice that I\nwill often give that I think is\nimportant is\num\ntry to sort of you know I think it's\nreally valuable to uh for people to\nreally you know be individually like you\nknow specialized and usual things I\nthink that you know there's a lot of\nthings to be doing and uh you know it's\nreally important to understand like I\nwas just saying all of these various\ndifferent sort of you know Concepts and\nstuff but I think that you know then we\nalso have to do something right and so\nyou know making some you know bat trying\nto figure out some place where you can\nbe helpful and really and concretely\naccomplish something that you know you\nthink is useful and then really doing\nthat thing I think is you know you know\nwhat I what I think is really the most\nvaluable right and so you know I try to\nyou know in like the the mentorship\nprogram uh you know try to get people to\num you know\nunderstand the basic concepts and really\nunderstand how to think about AI safety\nand what sorts of interventions might be\neffective and then you know find an\nintervention you know that they can do\nsomething that you know might be helpful\nand really execute effectively on that\nso I think that's sort of that's very\nbroad but that's sort of generally how I\nthink about you know you know trying to\naddress this having having good concrete\nmodels about how things are going to go\nand what sort of you know ways in which\nthings might go poorly and things that\nyou can do to make things go better and\nthen you know Finding individual\nparticular interventions and trying to\nexecute as best you can\nokay uh we will call it there so that\nwas the last talk so this is uh you know\nthe end for this but hopefully I have\nyou know given you know a lot of good\nyou know tools and understanding and\nConcepts to sort of understand you know\nand help you know think about this sort\nof General field uh of aicd\nforeign", "date_published": "2023-05-13T15:57:29Z", "authors": ["Evan Hubinger"], "summaries": []}