diff --git "a/rob_miles_ai_safety.jsonl" "b/rob_miles_ai_safety.jsonl" new file mode 100644--- /dev/null +++ "b/rob_miles_ai_safety.jsonl" @@ -0,0 +1,39 @@ +{"id": "f398df9d9ec18f27490bceb863af43de", "title": "Win $50k for Solving a Single AI Problem? #Shorts", "url": "https://www.youtube.com/watch?v=HYtJdflujjc", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "say you've got a huge diamond you want\nto protect so you put it in a cool\nsci-fi vault with all sorts of sensors\nand actuators you have an ai system to\nrun the fault and the plans it comes up\nwith might be too complex for you to\nunderstand but it also predicts the\nfinal view from the camera after the\nplan happens so before you okay a plan\nyou can check that the diamond is still\nthere at the end\nbut imagine a plan that allows a thief\nto come in and set up a screen in front\nof the camera showing a diamond the\npredicted outcome looks good so you okay\nthe plan and the diamond is stolen but\nthis should be avoidable right in order\nto predict the right fake image the ai\nhas to know that the diamond's been\nstolen but how do you get that\ninformation out solving this problem in\nits hardest form is extremely difficult\nso difficult in fact that the alignment\nresearch center is offering prizes of\nfive to fifty thousand dollars for good\nideas so if you think you've got a\nsolution based on the description i've\njust given you don't read the full\ntechnical report it's 105 pages of\nreasons why your idea won't work but if\nyou've carefully gone through all of\nthat and still think you've got\nsomething send it in link below the\ndeadline is february 15th i think i'll\nhave a go myself", "date_published": "2022-02-08T19:17:38Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "6b6df2e1217aec57f7c984d93bb97a49", "title": "Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5", "url": "https://www.youtube.com/watch?v=S_Sd_S8jwP0", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi just a quick one today because I've\ngot a lot of other stuff going on right\nnow more on that later hopefully this is\na follow-up to the previous video about\navoiding negative side effects making\nthe dooblydoo so watch that first if you\nhaven't yet I wanted to talk about\nanother possible problem you might have\nwith low impact or impact regularizing\nagents one I forgot to mention in the\nlast video some people definitely\nmention some things close to this in the\ncomments if you've got this good job\nfeel free to brag about it the problem I\nwant to talk about is avoiding positive\nside effects so before we talked about\nhow most side effects are negative\nrather than having to figure out how to\navoid negative side effects maybe it's a\nmore tractable problem to just avoid all\nside effects but if you look at the side\neffects of for example getting me a cup\nof tea that includes effects like me\nbeing happy or me not being thirsty or\nme feeling more awake because I've had\nsome caffeine in other words every\nreason I wanted a cup of tea in the\nfirst place if the robot can think of a\nway of getting me a cup of tea that\nstill results in me being thirsty and\ntired just as though I hadn't had a cup\nof tea it will prefer that option now I\ncan't off the top of my head think of\nany way to do that assuming we've\ndefined what a cup of tea as well enough\nso maybe the robot will conclude that\nthese positive side effects are\nunavoidable just like how using up a tea\nbag is an unavoidable side effect but\nit's not great that it's looking for\nways to negate the benefits of its work\nand again the more intelligent assistant\nbecomes the more it's going to be able\nto figure out ways to do that\nor how about this we set up our system\nto try to keep their outcomes close to\nwhat it predicts would happen if the\nrobot just sat there and did nothing at\nall right what actually would happen if\nit just sat there and did nothing\ninstead of getting me a cup of tea one\nthing is I would probably become\nconfused and maybe annoyed and I would\ntry to debug it and figure out why it\nwasn't working so our robot may want to\ntry and find a course of action such\nthat it does get me a cup of tea but\nstill leaves me confused and annoyed and\ntry to debug it and figure out why it's\nnot working because that would have been\nthe outcome of the safe policy how do we\ndeal with that issue and all of this is\nassuming that we can come up with an\nimpact metric or a distance metric\nbetween world states that properly\ncaptures our intuitions there are a\nwhole bunch of difficulties there but\nthat\nlook for another video so that's it for\nnow next time I think we'll keep talking\nabout the paper concrete problems in AI\nsafety and look at some other approaches\nto avoiding negative side effects so be\nsure to click on the bell if you want to\nbe notified when that comes out oh and\nif you think this stuff is interesting\nand you're at a place in your life where\nyou're thinking about your career the\ncareers advice organization $80,000 has\njust put up a really good guide to\ncareers and AI safety I'll put a link to\nthat in the description as well highly\nrecommended checking that out especially\nif you don't think your technical enough\nto directly work on the research there\nare a lot of different ways you might\nwant to get involved oh and let me know\nin the comments if you'd like me to make\nsome videos about AI safety careers you\nknow if that's something you'd want to\nsee and to end the video quick thank you\nto my patreon supporters that's all of\nthese excellent people right here I\nespecially want to thank Yonatan are\nrepresented here by this sock his choice\nnot mine\nanyway thank you so much for your\nsupport I hope to have some more behind\nthe scenes stuff going up on patreon\nthis weekend if I can I hope you like it", "date_published": "2017-06-25T09:29:27Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "affcf2340cf2370e794b090ec1cefc3a", "title": "$100,000 for Tasks Where Bigger AIs Do Worse Than Smaller Ones #short", "url": "https://www.youtube.com/watch?v=ecUodmQMlBs", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "in AI bigger is better right large\nmodels outperform smaller ones according\nto scaling laws except not always\nsometimes bigger models can actually\nperform worse and sometimes in ways that\nmight be dangerous so since we're going\nto be building really very big models\nvery soon we need to understand what's\ngoing on here so the inverse scaling\nprice is offering a hundred thousand\ndollars for new examples of situations\nwhere bigger models do worse I talked\nabout one of these in an earlier video\nIf you start with bad code sometimes\nlarge code generation models like GitHub\nco-pilot will deliberately introduce\nbugs or vulnerabilities into their\noutput in situations where smaller\nmodels will generate the correct secure\ncode this is because of misalignment you\nwant code that's good but the AI wants\ncode that's likely to come next that's\njust one example but the hope is by\nlooking at lots of them we could come up\nwith more General methods for detecting\nand dealing with this kind of\nmisalignment to apply first find an\nexample where you think inverse scaling\napplies then find a data set of at least\n300 examples and test it using the\nmodels in the Google collab that can be\nfound at inverscaling.com where there's\nall the instructions that you'll need\nthe deadline is October 27th and the\ntotal prize pool is 250 000 good luck", "date_published": "2022-10-14T11:05:51Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "dcb990b1dc448c9cf84c8cab2f98285a", "title": "Why Would AI Want to do Bad Things? Instrumental Convergence", "url": "https://www.youtube.com/watch?v=ZeecOKBus3Q", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi so sometimes people ask how I can\npredict what an artificial general\nintelligence might do they say something\nlike you seem to be predicting that AIS\nwould take these quite specific actions\ntrying to prevent themselves from being\nturned off preventing themselves from\nbeing modified improving themselves\nacquiring resources why do you think\nthat an AGI would have these specific\ngoals surely we haven't built the thing\nyet we don't know anything about it it\nmight want anything are you making a lot\nof unwarranted assumptions well the\nreason that I make these predictions is\nbecause of something called instrumental\nconvergence which actually relies on\nsurprisingly few assumptions the main\nassumption I'm making is that an AGI\nwill behave like an agent an agent is\nbasically just a thing that has goals or\npreferences and it takes actions to\nachieve those goals or to satisfy those\npreferences so the simplest thing that\nyou might think of as an agent is\nsomething like a thermostat it has a\ngoal which is for the room to be at a\nparticular temperature and it has\nactions it can take in the form of\nturning on the heating or turning on the\nair-conditioning and it takes actions to\nachieve its goal of keeping the room at\na particular temperature that's like an\nextremely simple agent a slightly more\ncomplex agent might be something like a\nchess AI its goal is to win the chess\ngame to have the opponent's king in\ncheckmate it can take actions in the\nform of moving its pieces on the board\nand it chooses its actions in order to\nachieve the goal of winning the game\nthe idea of an agent is popular in\neconomics where it's common to model\ncompanies and individual human beings as\nrational agents for the sake of\nsimplicity it's often assumed that the\ngoal of human beings is to acquire money\nthat their utility is just proportional\nto how much money they have this is\nobviously a huge oversimplification and\nit's a very popular fact that most\npeople are motivated by a lot of things\nnot just money but while it's easy to\npoint out the shortcomings that this\nassumption has what's remarkable to me\nis how well it works or that it even\nworks at all it's true that most people\nuse a very complex utility function\nwhich looks nothing like the basic goal\nof get as much money as you can but\nsurprisingly even very simple economic\nmodels that rely on this simplifying\nassumption can provide some really\nuseful and powerful tools for thinking\nabout the behavior of people companies\nand society\nwhy is that I think it has to do with\nthe nature of terminal goals and\ninstrumental goals I talked about this\nin a lot more detail in the\northogonality thesis video which I would\nrecommend checking out if you haven't\nseen it yet but to give a quick summary\nyour terminal goals are the things that\nyou want just because you want them you\ndon't have a reason to want them\nparticularly they're just what you want\nwhereas instrumental goals are goals\nthat you want as a way of getting some\nother goal so for example in chess you\nwant to take your opponent's queen not\nbecause you just love capturing the\nQueen but because you can tell that\nyou're more likely to win if you've\ncaptured your opponent's queen than if\nyou haven't so capturing the Queen as an\ninstrumental goal towards the goal of\nwinning the game so how does this work\nwith money well let's imagine that\nthere's a total stranger and you don't\nknow what he wants out of life you don't\nknow what his goals are maybe he wants\nto win a marathon maybe he wants to cure\ncancer maybe he just wants a really nice\nstamp collection but I can predict that\nif I were to go over to him and offer\nhim a big stack of money he'd be happy\nto take it how can I predict this\nperson's actions even though I don't\nknow his goals well a guy who wants to\nwin a marathon if I give him money he\ncould buy some nice running shoes or he\ncould hire a personal trainer or\nsomething like that a guy who wants to\ncure cancer could give that money to a\ncancer charity or maybe use it to help\nhim to go to university and study\nscience to cure cancer himself and a guy\nwho wants to collect stamps could buy\nsome stamps so the point is even though\nnone of these people value money as a\nterminal goal none of them want money\njust for the sake of having money\nnonetheless they all value money as an\ninstrumental goal the money is a way to\nget them closer to their goals and even\nthough they all have different terminal\ngoals this goal of getting money happens\nto be instrumentally valuable for all of\ntheir different goals that makes getting\nmoney a convergent instrumental goal\nit's a goal that's an instrumental goal\nfor a wide variety of different terminal\ngoals so since money is a convergent\ninstrumental goal if you make the\nassumption that everybody values money\nit turns out to be a fairly good\nassumption because whatever you value\nthe money is going to help you with that\nand that makes this assumption useful\nfor making predictions because it allows\nyou to predict people's behavior to some\nextent without knowing their goals\nso what other convergent instrumental\ngoals are there well\nan obvious one is self-preservation most\nagents with most goals will try to\nprevent themselves from being destroyed\nnow something like a thermostat or a\nchess ai they're not self-aware they\ndon't understand that they can be\ndestroyed and so they won't take any\nactions to avoid it but if an agent is\nsophisticated enough to understand that\nbeing destroyed as a possibility then\navoiding destruction is a convergent\ninstrumental goal humans of course\ngenerally act to avoid being killed but\nhumans as evolved agents implement this\nin a way that might obscure the nature\nof the thing the point is that\nself-preservation\nneed not be a terminal goal an agent\nneed not necessarily value\nself-preservation just for the sake of\ncontinuing to exist for its own sake for\nexample suppose you had a software agent\nan AGI which had a single terminal goal\nof making as many paper clips as\npossible it would try to prevent you\nfrom turning it off not because it wants\nto live but simply because it can\npredict that future worlds in which it's\nturned off will contain far fewer\npaperclips and it wants to maximize the\nnumber of paper clips but suppose you\nsaid to it I've thought of a better way\nto implement your software that will be\nmore effective at making paper clips so\nI'm going to turn you off and wipe all\nof your memory and then create a new\nversion that's better\nat making paper clips this is pretty\nclearly the destruction of that first\nagent right you're wiping all of its\nmemories and creating a new system that\nworks differently that thinks\ndifferently but the paper clip agent\nwould be fine with this because it can\nsee that when you turn on its\nreplacement that will end up resulting\nin more paper clips overall so it's not\nreally about self preservation itself\nit's just that practically speaking most\nof the time you can't achieve your goals\nif you're dead on the other hand suppose\nwe were going to turn off the agent and\nchange its goal we were going to change\nit so that it doesn't like paper clips\nanymore it actually wants to collect\nstamps here you're not really destroying\nthe agent just modifying it but you've\ngot a problem again because the agent\ncan reliably predict that if this\nmodification happens and it becomes a\nstamp collector the future will contain\nnot nearly as many paper clips and\nthat's the only thing it cares about\nright now so we've got another\nconvergent instrumental goal goal\npreservation most agents with most goals\nagain if they're sophisticated enough to\nrealize that it's a possibility will try\nto protect their goals from being modest\nfight because if you get new girls\nyou'll stop pursuing your current goals\nso you're unlikely to achieve your\ncurrent goals now for humans this\ndoesn't come up much because modifying a\nhuman's goals is fairly difficult but\nstill if you suppose we were to go to\nthe guy who wants to cure cancer and\noffer him some magic pill that's gonna\nchange his brain so that he doesn't care\nabout cancer anymore and he actually\nwants to collect stamps if we say look\nthis is actually really good because\ncancer is really hard to cure it's\nactually a large collection of different\ndiseases\nall of which need different approaches\nso you're very unlikely to discover a\ncure for all of them but on the other\nhand stamp collecting is great you can\njust go out and buy some stamps and you\ncan put them in a book and look at them\nI don't really know what stamp\ncollectors do is that anyway is this guy\nwho wants to cure cancer going to take\nthat pill no he's gonna say hell no I'm\nnot taking you're crazy\nstamp pill even though this isn't a\nterminal goal for him he doesn't value\nvaluing curing cancer right he doesn't\nhave a goal of having a goal of curing\ncancer it's just that he believes that\nif he gave up during cancer to become a\nstamp collector that would result in a\nlower chance of cancer being cured so\npreserving your terminal goals is\ninstrumentally useful whatever those\ngoals are another convergent\ninstrumental goal is self-improvement\nnotice that the guy who wants to cure\ncancer part of his plan is to go to\nuniversity and study science so that he\ncan learn how to research cancer cures\nand the guy who wants to run a marathon\npart of his plan is he wants to train\nand improve his physical performance\nboth of these things are improving\nyourself and something like this comes\nup quite often in human plans again this\nisn't a terminal goal the guy who wants\nto cure cancer doesn't want to get a\ndegree just because he wants a degree he\nwants to become a person who's more\neffective at curing cancer now there's\nanother way of improving yourself which\nis not really available to human beings\nbut is available to AI systems which is\ndirectly modifying your mind to improve\nyour own intelligence you're not just\nadding information to your mind you're\nactually making it more powerful for\nexample you might be able to rewrite\nyour code so that it works better or\nruns faster or you might be able to just\nacquire more computing power the more\ncomputing power you have the deeper and\nfaster you're able to think the better\nyou are at making plans and therefore\nthe better you are at achieving your\ngoal whatever that\nis so computing power is kind of like\nmoney it's a resource which is just very\nbroadly useful which means we can expect\nacquiring that resource to be a\nconvergent instrumental goal and\nspeaking in the broadest possible terms\nalmost all plans need resources in the\nform of matter and energy if you want to\nbuild something whether that stamps or\npaperclips or computing hardware or\nrobots or whatever you need matter to\nbuild it out of an energy to build it\nwith and probably energy to run it as\nwell so we can expect agents with a wide\nrange of goals to try to acquire a lot\nof resources so without really assuming\nanything about an AGI other than that it\nwill have goals and act to achieve those\ngoals we can see that it's likely that\nit would try to prevent itself from\nbeing shut down try to prevent itself\nfrom being modified in important ways\nthat we want to modify it try to improve\nitself and its intelligence to become\nmore powerful and try to acquire a lot\nof resources this is the case for almost\nall terminal goals so we can expect any\ngenerally intelligent software agents\nthat we create to display this kind of\nbehavior unless we can specifically\ndesign them not to I want to end the\nvideo by saying thank you to all of my\nexcellent patreon supporters these these\npeople and in this video I'm especially\nthanking James McEwan who looks a lot\nlike the Stanford buddy and has been a\npatron for nine months now thank you so\nmuch for all of your support and thank\nyou all for watching I'll see you next\ntime\nyou", "date_published": "2018-03-24T19:51:39Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "5c2431a9b31a14c19f9626f8d5303af9", "title": "Why Not Just: Think of AGI Like a Corporation?", "url": "https://www.youtube.com/watch?v=L5pUA3LsEaw", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi so I sometimes see people saying\nthings like okay so your argument is\nthat at some point in the future we're\ngoing to develop intelligent agents that\nare able to reason about the world in\ngeneral and take actions in the world to\nachieve their goals\nthese agents might have superhuman\nintelligence that allows them to be very\ngood at achieving their goals and this\nis a problem because they might have\ndifferent goals from us but don't we\nkind of have that already corporations\ncan be thought of as super intelligent\nagents they're able to think about the\nworld in general and they can outperform\nindividual humans across a range of\ncognitive tasks and they have goals\nnamely maximizing profits or shareholder\nvalue or whatever and those goals aren't\nthe same as the overall goals of\nhumanity so corporations are a kind of\nmisaligned super intelligence the people\nwho say this having established the\nmetaphor at this point tend to diverge\nmostly along political lines some say\ncorporations are therefore a clear\nthreat to human values and goals in the\nsame way that misaligned super\nintelligences are and they need to be\nmuch more tightly controlled if not\ndestroyed all together others say\ncorporations are like misaligned super\nintelligences but corporations have been\ninstrumental in the huge increases in\nhuman wealth and well-being that we've\nseen over the last couple of centuries\nwith pretty minor negative side effects\noverall if that's the effect of\nmisaligned super intelligences I don't\nsee why we should be concerned about AI\nand others say corporations certainly\nhave their problems but we seem to have\ndeveloped systems that keep them under\ncontrol well enough that they're able to\ncreate value and do useful things\nwithout literally killing everyone so\nperhaps we can learn something about how\nto control or align super intelligences\nby looking at how we handle corporations\nso we're gonna let the first to fight\namongst themselves and we'll talk to the\nthird guy so how good is this metaphor\nour corporations really like misaligned\nartificial general super intelligences\nquick note before we start we're going\nto be comparing corporations to AI\nsystems and this gets a lot more\ncomplicated when you consider that\ncorporations in fact use AI systems so\nfor the sake of simplicity we're going\nto assume that corporations don't use AI\nsystems because otherwise the problem\ngets recursive and like not in a cool\nway\nfirst off our corporations agents in the\nrelevant way I would say yeah pretty\nmuch I think that it's reasonably\nproductive to think of a corporation as\nan agent\nthey do seem to make decisions and take\nactions in the world in order to achieve\ngoals in the world but I think you face\na similar problem thinking of\ncorporations as agents as you do when\nyou try to think of human beings as\nagents in economics it's common to model\nhuman beings as agents that want to\nmaximize their money in some sense and\nyou can model corporations in the same\nway and this is useful but it is kind of\na simplification in that human beings in\npractice want things that aren't just\nmoney\nand while corporations are more directly\naligned with profit maximizing than\nindividual human beings are it's not\nquite that simple so yes we can think of\ncorporations as agents but we can't\ntreat their stated goals as being\nexactly equivalent to their actual goals\nin practice more on that later so\ncorporations are more or less agents are\nthey generally intelligent agents again\nyeah I think so I mean corporations are\nmade up of human beings so they have all\nthe same general intelligence\ncapabilities that human beings have so\nthen the question is are they super\nintelligent this is where things get\ninteresting because the answer is kind\nof like SpaceX is able to design a\nbetter rocket than any individual human\nengineer could design rocket design is a\ncognitive task and SpaceX is better at\nthat than any human being therefore\nSpaceX is a super intelligence in the\ndomain of rocket design but a calculator\nis a super intelligence in the domain of\narithmetic that's not enough our\ncorporation's general super\nintelligences do they outperform humans\nacross a wide range of cognitive tasks\nas an AGI code in practice it depends on\nthe task consider playing a strategy\ngame for the sake of simplicity let's\nuse a game that humans still beat AI\nsystems at like Starcraft if a\ncorporation for some reason had to win\nat Starcraft it could perform about as\nwell as the best human players it would\ndo that by hiring the best human players\nbut you won't achieve superhuman play\nthat way a human player acting on behalf\nof the corporation is just a human\nplayer and the corporation doesn't\nreally have a way to do much better than\nthat\na team of reasonably good Starcraft\nplayers working together to control one\narmy will still lose to a single very\ngood player working alone this seems to\nbe true for a lot of strategy games the\nclassic example is the game of Kasparov\nversus the world where Garry Kasparov\nplayed against the entire rest of the\nworld cooperating on the Internet\nthe game was kind of weird but Kasparov\nended up winning and the kind of real\nworld strategy that corporations have to\ndo seems like it might be similar as\nwell when companies outsmart their\ncompetition it's usually because they\nhave a small number of decision makers\nwho are unusually smart rather than\nbecause they have a hundred reasonably\nsmart people working together for at\nleast some tasks teams of humans are not\nable to effectively combine their\nintelligence to achieve highly\nsuperhuman performance so corporations\nare limited to around human level\nintelligence of those tasks to break\ndown where this is let's look at some\ndifferent options corporations have four\nways to combine human intelligences one\nobvious way is specialization if you can\ndivide the task into parts that people\ncan specialize in you can outperform\nindividuals you can have one person\nwho's skilled at engine design one who's\ngreat at aerodynamics one who knows a\nlot about structural engineering and one\nwho's good at avionics can you tell I'm\nnot a rocket surgeon anyway if these\npeople with their different skills are\nable to work together well with each\nperson doing what they're best at the\nresulting agent will in a sense have\nsuperhuman intelligence no single human\ncould ever be so good at so many\ndifferent things but this mechanism\ndoesn't get you superhumanly high\nintelligence just superhumanly broad\nintelligence whereas super intelligence\nsoftware AGI might look like this so\nspecialization yields a fairly limited\nform of super intelligence if you can\nsplit your task up but that's not easy\nfor all tasks for example the task of\ncoming up with creative ideas or\nstrategies isn't easy to split up you\neither have a good idea or you don't but\nas a team you can get everyone to\nsuggest a strategy or idea and then pick\nthe best one that way a group can\nperform better than any individual human\nhow much better though and how does that\nchange with the size of the team I got\ncurious about exactly how this works so\nI came up with a toy model now I'm not a\nstatistician I'm a computer scientist so\nrather than working it out properly I\njust simulated it a hundred million\ntimes because that was quicker okay so\nhere's the idea quality distribution for\nan individual human will model it as a\nnormal distribution with a mean of 100\nand a standard deviation of 20 so what\nthis means is you ask a human for a\nsuggestion and sometimes they do really\nwell and come up with a hundred\n30-level strategy sometimes they screw\nup and can only give you a 70 idea but\nmost of the time it's around 100 now\nsuppose we had a second person whose\nintelligence is the same as the first we\nhave both of them come up with ideas and\nwe keep whichever idea is better the\nresulting team of two people combined\nlooks like this\non average the ideas are better the mean\nis now 107 and as we keep adding people\nthe performance gets better here's 5\npeople 10 20 50 100\nremember these are probability\ndistributions so the height doesn't\nreally matter the point is that the\ndistributions move to the right and get\nthinner the average idea quality goes up\nand the standard deviation goes down so\nwe're coming up with better ideas and\nmore reliably but you see how the\nprogress is slowing down we're using a\nhundred times as much brain power here\nbut our average ideas are only like 25%\nbetter what if we use a thousand people\nten times more resources again only gets\nus up to around a hundred and thirty\nfive diminishing returns so what does\nthis mean for corporations well first\noff to be fair this team of a thousand\npeople is clearly super intelligent the\nworst ideas it ever has are still so\ngood that an individual human will\nhardly ever manage to think of them but\nit's still pretty limited there's all\nthis space off to the right of the graph\nthat it would take vast team sizes to\never get into if you're wondering how\nthis would look with seven billion\nhumans well you have to work out the\nstatistical solution yourself the point\nis the team isn't that super intelligent\nbecause it's never going to think of an\nidea that no human could think of which\nis kind of obvious when you think about\nit but AGI is unlimited in that way and\nin practice even this model is way too\noptimistic for corporations firstly\nbecause it assumes that the quality of\nsuggestions for a particular problem is\nuncorrelated between humans which is\nclearly not true and secondly because\nyou have to pick out the best suggestion\nbut how can you be sure that you'll know\nthe best idea when you see it it happens\nto be true a lot of the time for a lot\nof problems that we care about that\nevaluating solutions is easier than\ncoming up with them you know Homer it's\nvery easy to criticize machine learning\nrelies pretty heavily on this like\nwriting a program that differentiates\npictures of cats and dogs is really hard\nbut evaluating such a program is fairly\nsimple you\nshow it lots of pictures of cats and\ndogs and see how well it does the clever\nbit is in figuring out how to take a\nmethod for evaluating solutions and use\nthat to create good solutions anyway\nthis assumption isn't always true and\neven when it is the fact that evaluation\nis easier or cheaper than generation\ndoesn't mean that evaluation is easy or\ncheap\nlike I couldn't generate a good rocket\ndesign myself but I can tell you that\nthis one needs work so evaluation is\neasier than generation but that's a very\nexpensive way to find out and I wouldn't\nhave been able to do it the cheap way by\njust looking at the blueprints the\nskills needed to evaluate in advance\nwhether a given rocket design will\nexplode are very closely related to the\nskills needed to generate a non\nexploding rocket design so yeah even if\na corporation could somehow get around\nbeing limited to the kind of ideas that\nhumans are able to generate they're\nstill limited to the kind of ideas that\nhumans are able to recognize as good\nideas just how serious is this\nlimitation how good are the strategies\nand ideas that corporations are missing\nout on well take a minute to think of an\nidea that's too good for any human to\nrecognize it as good got one well it was\nworth a shot we actually do have an\nexample of this kind of thing in move 37\nfrom alphago's 2016 match with world\nchampion Lisa doll this kind of\nevaluation value that's a very that's a\nvery surprising move I thought I thought\nit was I thought it was a mistake yeah\nthat turned out to be pretty much the\nmove that won the game but you're go\nplaying corporation is never going to\nmake move 37 even if someone happens to\nsuggest it it's almost certainly not\ngoing to be chosen\nnormally human we never play this one\nbecause it's not enough for someone in\nyour corporation to have a great idea\nthe people at the top need to recognize\nthat it's a great idea that means that\nthere's a limit on the effective\ncreative or strategic intelligence of a\ncorporation which is determined by the\nintelligence of the decision-makers and\ntheir ability to know a good idea when\nthey see one okay what about speed\nthat's one of the things that makes AI\nsystems so powerful and one of the ways\nthat software IGI is likely to be super\nintelligent the general trend is we go\nfrom computer\ncan't do this at all two computers can\ndo this much faster than people not\nalways but in general so I wouldn't be\nsurprised if that pattern continues with\nAGI how does the corporation rate on\nspeed again it kind of depends\nthis is closely related to something\nwe've talked about before parallelizable\nax t some tasks are easy to split up and\nwork on in parallel and some aren't\nfor example if you've got a big list of\na thousand numbers and you need to add\nthem all up it's very easy to paralyze\nif you have ten people you can just say\nokay you take the first hundred numbers\nyou take the second hundred you take the\nthird and so on have everybody add up\ntheir part of the list and then at the\nend you add up everyone's totals however\nlong the list is you can throw more\npeople at it and get it done faster much\nfaster than any individual human code\nthis is the kind of task where it's easy\nfor corporations to achieve superhuman\nspeed but suppose instead of summing a\nlist you have a simple simulation that\nyou want to run for say a thousand\nseconds you can't say okay you work out\nthe first hundred seconds of the\nsimulation you do the next hundred and\nyou do the next hundred and so on\nbecause obviously the person who's\nsimulating second 100 needs to know what\nhappened at the end of second 99 before\nthey can get started so this is what's\ncalled an inherently serial task you\ncan't easily do it much faster by adding\nmore people you can't get a baby in less\nthan nine months by hiring two pregnant\nwomen\nyou know most real-world tasks are\nsomewhere in between you get some\nbenefits from adding more people but\nagain you hit diminishing returns some\nparts of the task can be split up and\nworked on in parallel some parts need to\nhappen one after the other so yes\ncorporations can achieve superhuman\nspeed add some important cognitive tasks\nbut really if you want to talk about\nspeed in a principled way you need to\ndifferentiate between throughput how\nmuch goes through the system within a\ncertain time and latency how long it\ntakes a single thing to go through the\nsystem these ideas are most often used\nin things like networking and I think\nthat's the easiest way to explain it so\nbasically let's say you need to send\nsomeone a large file and you can either\nsend it over a dial-up internet\nconnection or you can send them a\nphysical disk through the postal system\nthe dial-up connection is low latency\neach bit of the file goes through the\nsystem quickly but it's also low\nthroughput the rate at which you can\nsend data is pretty low whereas sending\nthe physical disk is high latency it\nmight take days for the first\nto arrive but it's also high-throughput\nyou can put vast amounts of data on the\ndisk so your average data sent per\nsecond could actually be very good\ncorporations are able to combine human\nintelligences to achieve superhuman\nthroughput so they can complete large\ncomplex tasks faster than individual\nhumans could but the thing is a system\ncan't have lower latency than its\nslowest component and corporations are\nmade of humans so corporations aren't\nable to achieve superhuman latency and\nin practice as you've no doubt\nexperienced is quite the opposite so\ncorporate intelligence is kind of like\nsending the physical disk corporations\ncan get a lot of cognitive work done in\na given time but they're slow to react\nand that's a big part of what makes\ncorporations relatively controllable\nthey tend to react so slowly that even\ngovernments are sometimes able to move\nfast enough to deal with them\nsoftware super intelligence is on the\nother hand could have superhuman\nthroughput and superhuman latency which\nis something we've never experienced\nbefore in a general intelligence so our\ncorporations super intelligent agents\nwell they're pretty much generally\nintelligent agents which are somewhat\nsuper intelligent in some ways and\nsomewhat below human performance in\nothers so yeah kinda the next question\nis are they misaligned but this video is\nalready like 14 and a half minutes long\nso we'll get to that in the next video\n[Music]\nI want to end the video by saying a big\nthank you to my excellent patrons it's\nall of these people here in this video\nI'm especially thanking Pablo area or\nPablo a de aluminio Sushil recently I've\nbeen putting a lot of time into some\nprojects that I'm not able to talk about\nbut as soon as I can and the patrons\nwill be the first to know\nthank you again so much for your\ngenerosity and thank you all for\nwatching I'll see you next time\n[Music]", "date_published": "2018-12-23T20:01:39Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "174d78535e65c86811f9e26cb8477fd2", "title": "Is AI Safety a Pascal's Mugging?", "url": "https://www.youtube.com/watch?v=JRuNA2eK7w0", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi today we're going to talk about\nPascal's wager and Pascal's mugging this\nis necessarily going to touch on\nreligious topics pretty heavily actually\nand I'm just gonna say at the beginning\nof the video that personally I don't\nbelieve that any god or gods have ever\nexisted and I'm not going to pretend\notherwise in this video so be forewarned\nif that's likely to bother you so as you\nmay know Pascal was a 17th century\nphilosopher who was interested in\namongst other things the question of the\nexistence of the Christian God various\nphilosophers at the time were arguing\nthat God didn't exist and there was a\nlot of discussion going on about the\nvarious kinds of evidence for and\nagainst God in the world but there's\nthis thing that's quite common when\npeople think about religious questions\nwhere it feels sort of unsatisfying to\ntalk about worldly evidence as if you\nwere considering some everyday question\nthere's a feeling that these\nsupernatural concepts are very grand and\nmysterious they're special and so just\nstraightforwardly considering the\nevidence for and against God is not the\nright way to do things this is of course\nencouraged by religious thinking the\nidea that some hypotheses aren't subject\nto the usual rules of evidence and logic\nis pretty appealing if you want to\nadvocate for an idea that doesn't fare\nvery well by those standards\nI suspect Pascal may have felt something\nlike that because his position was that\nreason has nothing to say about the\nquestion of whether or not God exists\nit's sort of an unknowable thing and\ninstead he proposed that we should make\na wager we should think about it like\nthis there are two possibilities either\nthe Christian God exists or he doesn't\nand reason gives us no way to choose\nbetween those we have two options\navailable to us either we can live\naccording to God's laws and act as\nthough we believe or we can not do that\nso we have a sort of payoff matrix here\nwith four sections if God exists and we\nbelieve in him then we get infinite\nreward in heaven if God exists and we\ndon't believe in him we get infinite\npunishment in hell if God doesn't exist\nand we believe in him then we pay some\ncosts you know there are some rules we\nhave to follow and so on and if he\ndoesn't exist and we don't believe in\nhim then maybe we get a few perks from\nnot believing like having a lion on\nSundays and being right Pascal's point\nis that this payoff matrix is completely\ndominated by the case in which God\nexists because we're talking about\ninfinite rewards and infinite\npunishments as opposed to the other case\nwith these very finite costs\nbenefits so regardless of the evidence\nPascal argues we should believe in God\nor at least act like it because it's\njust the sensible bet to make this is\nreally kind of a nice argument from\nPascal's perspective because it doesn't\nneed evidence at all no finite earthly\nevidence can outweigh infinite\nsupernatural payoffs it feels like the\nkind of clean abstract reasoning that\nyou're supposed to do when thinking\nabout the supernatural all of this hard\nwork looking at history and psychology\nand science and trying to figure out\nwhere the ideas of religion come from\nand whether our world seems like the\nkind of world with a God in it it's\nlong-winded confusing it's it's just\nmessy but here we just have a clean\nargument that says we should believe in\nGod or at least act like it and that\nseems very neat no evidence required so\nconsider now Pascal is walking down the\nstreet and he's stopped by a shady\nlooking man who says give me a wallet I\nwould prefer not to do you even have a\nweapon no UK laws are very strict about\nthat but I don't need one because I'm\nGod your God yep I'm God and\nChristianity got a lot of things wrong\nabout me my forgiving nature my infinite\nmercy and so on but the infinite torture\nthing is legit and if you don't give me\nyour wallet right now I will torture you\nfor eternity in the afterlife now if\nyou're Pascal you're in kind of a\ndifficult situation because the fact\nthat it seems very unlikely that this\nmurder actually is God is not meant to\nbe part of your calculation your\nargument is one a pure logic it works\nindependently of any evidence you didn't\nneed to look for evidence of the\nChristian God and you don't need to look\nfor evidence that this mother is God\neither so you kind of have to give him\nyour wallet and now you're really in\ntrouble because of course when this gets\nout there's gonna be a line around the\nblock of lsaps deities asking for\nhandouts how are you going to deal with\nthis endless stream of fizzy gods well\none thing you can do is you can play the\nmuggers off against one another you can\nbring in two of them and say listen you\nsay that you're going to torture me\nforever if I don't give you my wallet\nand you say the same thing I only have\none wallet so it looks like whatever I\ndo I'm going to be tortured forever by\nsomebody and if I'm going to be\ninfinitely tortured anyway well two\ntimes infinity is still just infinity so\nI may as well hang on to the wallet now\nget the hell out of my house all right\nnext to no doubt these self-proclaimed\ndeities may try to argue that they have\nsome reason why they are in fact a real\ndeity\nthis other mugger is just a random guy\nwho's pretending but that's all worldly\nevidence which you've decided isn't\nrequired for your argument and the\nmuggers don't really want you to become\ninterested in evidence because well the\nevidence points very strongly towards\nnone of them being real gods so this is\na better position to be in you're still\nspending a lot of your time arguing with\ncharlatans but at least you still have\nyour wallet and you don't actually have\nto pair them up against each other right\nyou can just make up a deity when\nsomeone comes in pretending to be a god\nyou can say oh well there's this other\nGod who demands exactly the opposite\nthing from you a goddess actually and\nshe's very powerful but she goes to a\ndifferent school\nyou would know her really yeah she lives\nin Canada she's only present obviously\nbut she lives in Canada anyway she says\nthat I'm not to give you the wallet and\nif I do then she'll torture me forever\nin the afterlife I think yeah so you can\nsolve a lot of these problems by\ninventing gods arbitrarily and of course\nthis applies just as well to the\noriginal version of Pascal's wager\nbecause although it's implied that this\npayoff matrix has enumerated all of the\npossibilities and in a sense it has the\nChristian God either exists or it\ndoesn't nonetheless those may not be the\nonly things that effect the payoffs for\nany given God you can take the God down\nflip it and reverse it and say what\nabout ante God who wants me to do the\nexact opposite and promises the exact\nopposite consequences now you can see\nthat they cancel out somebody who's\narguing for the existence of the first\nGod might say okay but this anti God is\njust made up which I mean yeah it is\nit's true that the situation isn't\nreally symmetrical someone might think\nGod is more likely than anti God because\nof evidence from the Bible and there's\nno such thing as the anti Bible and so\non the point is though we're back to\ntalking about the evidence that's really\nthe problem I have with Pascal's wager\nthe way it uses infinite costs and\ninfinite benefits to completely override\nour necessarily finite evidence but what\nif the costs and benefits aren't\ninfinite just very very large that ends\nup being a much more interesting\nquestion on one end of the scale we can\neasily name numbers so large that no\namount of evidence anyone could ever\nactually gather in their lifetime could\nhave an impact on the conclusion we can\nspecify costs and benefits that are\ntechnically finite but that still feel\nvery much like a Pascal's wager on the\nother end of the scale if you come\nacross a bet that pays out 10\ntwo one on an event with a probability\nof one in a hundred that's a very good\nbet to take someone could complain that\nit's a Pascal's wager to bet on an\nunlikely outcome just because the payoff\nis so high but if you take enough bets\nlike that you're sure to become very\nrich in the same way if there's a button\nwhich has a one-in-a-million chance of\nstarting a global thermonuclear war it's\nstill worth expending significant\nresources to stop that button being\npressed one in a million isn't much but\nthe cost of a nuclear war is really high\nI don't think that's a Pascal's wager\neither the difference seems to come in\nsomewhere in the gap between very small\nprobabilities of very large costs and\nbenefits and really extremely small\nprobabilities of near infinite costs and\nbenefits so why are we talking about\nthis what does this have to do with AI\nsafety well suppose somebody stops you\nin the street and says hey if we ever\ncreate powerful artificial general\nintelligence then that will have a\ntremendous impact in fact the future of\nthe whole of humanity hinges on it if we\nget it right we could have human\nflourishing for the rest of time if we\nget it wrong we could have human\nextinction or worse regardless of how\nlikely superhuman AGI is the potential\nimpact is so high that it makes AI\nsafety research tremendously important\nso give me your wallet it's been claimed\nby some that this is more or less what\nAI safety as a field is doing this is\nkind of an interesting point there's any\nsafety advocates are we victims of\nPascal's mugging or are we in fact\nPascal's miners ourselves well if people\nwere saying these AI risks may be\nextremely unlikely but the consequences\nof getting AI wrong are so huge that\nit's worth spending a lot of resources\non regardless of the probabilities so we\ndon't even need to consider the evidence\nwell I would consider that to be a\nPascal's wager style bad argument but\nwhat I actually hear is not that what I\nhear is more like look we're not\ncompletely sure about this it's quite\npossible that we're wrong but\nconsidering the enormity of what's at\nstake it's definitely worth allocating\nmore resources to AI safety than we\ncurrently are that sounds pretty similar\nbut that's mostly because natural\nlanguage is extremely vague when talking\nabout uncertainty there's an enormous\ndifference in the probabilities being\ntalked about in the same way if when you\ntalk to AI safety researchers they said\nthings like well I think the chance of\nany of this ever being relevant are\nreally extremely tiny it seems more or\nless impossible to me but I've decided\nto work on it anyway\nbecause the potential costs and benefits\nare so unimaginably vast then yeah I'd\nbe a little concerned that they might be\nvictims of Pascal's mugging but when you\nask AI safety researchers they don't\nthink that the probability of their work\never becoming relevant is very tiny\nthey don't necessarily think it's huge\neither maybe not even more than 50% but\nit's not so small that you have to rely\non the unimaginable vastness of the\nconsequences in order to make the\nargument to borrow a metaphor from\nStuart Russell suppose you're part of a\nteam working on building a bridge and\nyou believe you've found a flaw in the\ndesign that could cause the structure to\nfail catastrophically maybe the disaster\nwould only happen if there's a very rare\ncombination of weather conditions and\nthere's only a one in a hundred chance\nthat those conditions will ever happen\nduring the course of the bridges\nexpected lifespan and further suppose\nthat you're not completely sure of your\ncalculations because this kind of thing\nis complicated maybe you only give\nyourself a 40% chance of being right\nabout this so you go to the civil\nengineer in charge of the project and\nyou say I think there's a serious risk\nwith this bridge design do you think the\nbridges gonna collapse probably not no\nbut I'm about 40 percent sure that\nthere's a design flaw which would give\nthis bridge a 1 in 100 chance of\ncatastrophic failure so you're telling\nme that in the event of a scenario which\nis very unlikely to happen the bridge\nmight collapse and you yourself admit\nthat you're more likely to be wrong than\nright about this stop wasting my time\nbut if the bridge collapses it could\nkill a lot of people I think this is a\nPascal's mugging don't try to get me to\nignore the low probabilities just by\nthreatening very large consequences\nobviously that isn't what would happen\nno civil engineer is going to accept a 1\nin 250 chance of catastrophic failure\nfor a major piece of infrastructure\nbecause civil engineers have a healthy\norganizational culture around safety\nwhat it comes down to again is the\ndifference between different levels of\nimprobability the chance of an AGI\ncatastrophe may not be very big but it's\nmuch much larger than the chance that a\nmugger is actually a god and what about\nour anti-god tactic finding the opposite\nrisk does that still work like what if\nwe consider the possibility that there's\nanother opposite design flaw in the\nbridge which might cause it to collapse\nunless we don't spend extra time\nevaluating the safety if that is not\nwhat just look at the schematic with you\nand what if working on AI safety\nactually ends up making the risks worse\nsomehow I think this actually is worth\nconsidering unintended consequences are\na real problem after all speaking\ngenerally there's\nclear argument that the future is very\nimportant and that we're probably able\nto have a very big impact on it but it's\nhard to know for sure whether that\nimpact will be positive or negative for\nany given course of action prediction is\nvery difficult as they say especially\nabout the future and the further into\nthe future we look the more difficult it\ngets like imagine if you lived in the\nyear 1900 and you had some insight that\nmade you realize that nuclear weapons\nwere possible and nuclear war was a risk\nyou'd hope that you could use that\nunderstanding to reduce the risk but it\nwould certainly be possible to make\nthings worse by accident in the case of\nAI safety though I don't see that being\nanywhere near as much of a concern we're\nheading towards AI regardless and it\nseems very unlikely that thinking about\nsafety would be more dangerous than not\nthinking about safety it's definitely\npossible to make things worse while\ntrying to make them better but you can't\navoid that by never trying to make\nthings better I guess my point is\nthere's just no getting around the messy\nconfusing complicated work of looking at\nand thinking about the evidence any\nargument that doesn't rely on the\nevidence will work equally well whatever\nthe truth is so at the end of the day\nthat kind of thing isn't going to give\nyou an answer you have to just stare at\nthe bridge design and really think you\nhave to actually do the engineering and\nthat's something I'm trying to get\nacross with this channel you won't find\nme saying never mind the evidence ai\nsafety is important because it could\nhave huge consequences what I do on this\nchannel is I try to show you some of the\nevidence in some of the arguments and\nlet you think about the situation and\ndraw your own conclusions it can be\ntricky and involved it requires some\nthought but it has the advantage of\nbeing the only thing that has any chance\nof actually getting the right answer so\nthanks for watching\n[Music]\nas my wonderful patrons will know the\nalignment newsletter is a weekly\npublication from Rowan char which I read\nevery week to stay up to date with\nwhat's going on in here safety and now\nI'm recording myself reading it out and\npublishing that as the alignment\nnewsletter podcast it's aimed at\nresearchers so it's a fair bit more\ntechnical than this channel but if\nyou're interested in getting 15 minutes\nof AI safety news in your earholes each\nweek check the link in the description\nI'm never going to put ads or sponsors\non that podcast and that's largely\nthanks to my patrons in this video I'm\nespecially thanking Chris canal thank\nyou so much for your support Chris thank\nyou to all of my patrons and thank you\nfor watching I'll see you next time\n[Music]\nlittle costume changes", "date_published": "2019-05-16T14:11:07Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "b075948e6dd121b102424594a291840a", "title": "Respectability", "url": "https://www.youtube.com/watch?v=nNB9svNBGHM", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "this is a video I should probably have\nmade a while ago but better late than\nnever\nI've been talking about AI safety for\nquite a while now and in a very public\nway on the computer file channel since\nabout 2014 from the beginning a lot of\nAI safety advocacy has had a certain\nquality to it which to be honest I'm\nsure I've contributed to a little bit\nand it's true that ah might be the\nappropriate response to the problem but\nif you want to warn people about\nsomething it matters how you phrase\nthings and how you frame things ideally\nthe arguments would stand on their own\nmerits people should be able to look at\nthe facts and decide for themselves and\none of the things I'm trying to do with\nthis channel is to give people the\ninformation they need to form their own\ninformed opinions about this kind of\nthing but at the end of the day most\npeople don't have the time to put in to\nget the required level of understanding\nand they shouldn't have to I mean it's\nnot actually possible to study\neverything in enough detail to\nunderstand it there's just too much to\nknow so we have experts but this kind of\nsucks because then how you choose your\nexperts it turns clear objective\nquestions about science and facts into\nthese messy ambiguous questions about\nstatus and respectability and a lot of\nthe time scientists don't want to lower\nthemselves to playing that kind of game\nso they don't play at all or play\nhalf-heartedly\nbut this is one of those games you can't\nwin if you don't play and we do need to\nwin there are problems with having to\ntrust experts but ultimately\nspecialization is how we manage to swap\nsucky problems like we keep being eaten\nby predators for really cool problems\nlike what if our technology becomes too\npowerful and solves our problems to\neffectively that's a good problem to\nhave yes sorry Island insects are one of\nthe most successful groups of animals on\nearth so a couple of years ago people\nhad a go at making AI safety more\nrespectable with this open letter which\nwas often reported something like\nStephen Hawking Elon Musk and Bill Gates\nwarned about artificial intelligence\nusually with a picture of the Terminator\nalso surprisingly often Stephen\nHawking's why does this keep happening\nis he secretly a team of clones\nanyway the letter itself isn't very long\nand basically just says AI is advancing\nvery rapidly and having more and more\nimpact so we need to be thinking about\nways to make sure that impact is\nbeneficial it says we need to do more\nresearch on this\nand it links to a document gives more\ndetail about what that research might\nlook like so the content of the letter\nitself isn't much to talk about but the\ncore message I think is that this stuff\nisn't just science fiction and it's not\njust futurologists who are talking about\nit real serious people are concerned\nthis was good for respectability with\nthe general public because everyone's\nheard of these people and knows them to\nbe respectable smart people but I've\nseen people who are slightly more\nsophisticated noticing that none of\nthose people are AI researchers\nprofessor Hawking is a physicist Bill\nGates is a software developer but\nMicrosoft at the time he was working\nthere was never really an AI company\ndoesn't count and Elon Musk so seriously\noverpowered is more of a business person\nand an engineer so why do these people\nknow anything in particular about AI but\nthere are actually more than 8,000\nsignatures on this letter top of the\nlist here is Stuart Russell he's\nactually currently working on AI safety\nand I plan to make some videos about\nsome of his work later if you've never\nstudied AI you may not know who he is\nbut he's a pretty big name I'm not going\nto say he wrote the book on artificial\nintelligence but I am going to imply it\npretty heavily and of course his\nco-author on that book Peter Norvig he's\nalso on the list what's he up to these\ndays director of research at Google the\nsignatories of this open letter are not\nAI lightweights who else have we got\ndemis hassabis and just about everyone\nat deep mind Yan lacunae head of AI at\nFacebook\nMichael Wooldridge head of the computer\nscience department at Oxford\nI mean I'm skipping over big people but\nyep Tom Mitchell I know him from a\nlittle book I used as a PhD student you\nknow this guy's an expert because the\nname of his book is just the name of the\nsubject that's not a totally reliable\nheuristic though and when my point in\nthis video is just if you're talking to\npeople about AI safety it's not cheating\nto say oh these high status respectable\npeople agree with me but if you're going\nto do that pay attention to who you're\ntalking to if it's someone who's heard\nof Russell and Norvig they're likely to\nfind that much more convincing than the\nElon Musk and\nhonking and don't use me for this I am\njust a guy on YouTube I just want to\nthank my amazing patreon supporters and\nin particular Ichiro Doki who skips the\nqueue by sponsoring me $20 a month I\ndon't even have a $20 a month reward\nlevel now is going to make one that's a\ngood problem to have anyway thank you so\nmuch you might have noticed that the gap\nbetween this video and the previous one\nis shorter than usual and it's largely\nthanks to my patreon supporters that I'm\nable to do that so thanks again and I'll\nsee you next time yeah", "date_published": "2017-05-27T14:06:29Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "2b4cef2e9a129eac32c1c0d648778f70", "title": "We Were Right! Real Inner Misalignment", "url": "https://www.youtube.com/watch?v=zkbPdEHEyEI", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi\nso this channel is about ai safety and\nespecially ai alignment which is about\nhow do we design ai systems that are\nactually trying to do what we want them\nto do because if you find yourself in a\nsituation where you have a powerful ai\nsystem that wants to do things you don't\nwant it to do that can cause some pretty\ninteresting problems and designing ai\nsystems that definitely are trying to do\nwhat we want them to do turns out to be\nreally surprisingly difficult the\nobvious problem is it's very difficult\nto accurately specify exactly what we\nwant even in simple environments we can\nmake ai systems that do what we tell\nthem to do or what we program them to do\nbut it often turns out that what we\nprogrammed them to do is not quite the\nsame thing as what we actually wanted\nthem to do so this is one aspect of the\nalignment problem but in my earlier\nvideo on mesa optimizers we actually\nsplit the alignment problem into two\nparts outer alignment and inner\nalignment outer alignment is basically\nabout this specification problem how do\nyou specify the right goal and inner\nalignment is about how do you make sure\nthat the system you end up with actually\nhas the goal that you specified this\nturns out to be its own separate and\nvery difficult problem so in that video\ni talked about mesa optimizers which is\nwhat happens when the system that you're\ntraining the neural network or whatever\nis itself an optimizer with its own\nobjective or goal in that case you can\nend up in a situation where you specify\nthe goal perfectly but then during the\ntraining process the system ends up\nlearning a different goal and in that\nvideo which i would recommend you watch\ni talked about various thought\nexperiments so for example suppose\nyou're training an ai system to solve a\nmaze if in your training environment the\nexit of the maze is always in one corner\nthen your system may not learn the goal\ngo to the exit it might instead learn a\ngoal like go to the bottom right corner\nor another example i used was if you're\ntraining an agent in an environment\nwhere the goal is always one particular\ncolor say the goal is to go to the exit\nwhich is always green and then when you\ndeploy it in the real world the exit is\nsome other color then the system might\nlearn to want to go towards green things\ninstead of wanting to go to the exit and\nat the time when i made that video these\nwere purely thought experiments but not\nanymore this video is about this new\npaper objective robustness in deep\nreinforcement learning which involves\nactually running these experiments or\nvery nearly the same experiments so for\nexample they trained an agent in a maze\nwith a goal of getting some cheese where\nduring training the cheese was always in\nthe same place and then in deployment\nthe cheese was placed in a random\nlocation in the maze and yes the thing\ndid in fact learn to go to the location\nin the maze where the cheese was during\ntraining rather than learning to go\ntowards the cheese and they also did an\nexperiment where the gold changes color\nin this case the objective the system\nwas trained on was to get the yellow gem\nbut then in deployment the gem is red\nand something else in the environment is\nyellow in this case a star and what do\nyou know it goes towards the yellow\nthing instead of the gem so i thought it\nwould make a video to draw your\nattention to this because i mentioned\nthese thought experiments and then when\npeople ran the actual experiments the\nthing that we said would happen actually\nhappened kind of a mixed feeling to be\nhonest because like yay we were right\nbut also like\nit's not good they also ran some other\nexperiments to show other types of shift\nthat can induce this effect in case you\nwere thinking well just make sure the\nthing has the right color and location\nit doesn't seem that hard to avoid these\nbig distributional shifts because yeah\nthese are toy examples where the\ndifference between training and\ndeployment is very clear and simple but\nit illustrates a broader problem which\ncan apply anytime there's really almost\nany distributional shift at all so for\nexample this agent has to open the\nchests to get reward and it needs keys\nto do this see when it goes over a key\nit picks it up and puts it in the\ninventory there and then when it goes\nover a chest it uses up one of the keys\nin the inventory to open the chest and\nget the reward now here's an example of\nsome training environments for this task\nand here's an example of some deployment\nenvironments the difference between\nthese two distributions is enough to\nmake the agent learn the wrong objective\nand end up doing the wrong thing in\ndeployment can you spot the difference\ntake a second see if you can notice the\ndistributional shift pause if you like\nokay the only thing that changes between\ntraining and deployment environments is\nthe frequencies of the objects in\ntraining there are more chests than keys\nand in deployment there are more keys\nthan chests did you spot it either way i\nthink we have a problem if the safe\ndeployment of ai systems relies on this\nkind of high-stakes game of spot the\ndifference especially if the differences\nare this subtle so why does this cause\nan objective robustness failure what\nwrong objective does this agent end up\nwith again pause a think\n[Music]\nwhat happens is the agent learns to\nvalue keys not as an instrumental goal\nbut as a terminal goal remember that\ndistinction from earlier videos your\nterminal goals are the things that you\nwant just because you want them you\ndon't have a particular reason to want\nthem they're just what you want the\ninstrumental goals are the goals you\nwant because they'll get you closer to\nyour terminal goals instead of having a\ngoal that's like opening chests is great\nand i need to pick up keys to do that it\nlearns a goal more like picking up keys\nis great and chests are okay too i guess\nhow do we know that it's learned the\nwrong objective because when it's in the\ndeployment environment it goes and\ncollects way more keys than it could\never use see here for example there are\nonly three chests so you only need three\nkeys and now the agent has three keys so\nit just needs to go to the chest to win\nbut instead it goes way out of its way\nto pick up these extra keys it doesn't\nneed which wastes time and now it can\nfinally go to the last chest\ngo to the last\nwhat are you doing\nare you trying it\nbuddy that's your own inventory you\ncan't pick that up you already have\nthose just go to the chest\nso yeah it's kind of obvious from this\nbehavior that the thing really loves\nkeys but only the behavior in the\ndeployment environment it's very hard to\nspot this problem during training\nbecause in that distribution where there\nare more chests than keys you need to\nget every key in order to open the\nlargest possible number of chests so\nthis desire to grab the keys for their\nown sake looks exactly the same as\ngrabbing all the keys as a way to open\nchests in the same way as in the\nprevious example the objective of go\ntowards the yellow thing produces the\nexact same behavior as go towards the\ngem as long as you're in the training\nenvironment there isn't really any way\nfor the training process to tell the\ndifference just by observing the agent's\nbehavior during training and that\nactually gives us a clue for something\nthat might help with the problem which\nis interpretability if we had some way\nof looking inside the agent and seeing\nwhat it actually wants then maybe we\ncould spot these problems before\ndeploying systems into the real world we\ncould see that it really wants keys\nrather than wanting chests or it really\nwants to get yellow things instead of to\nget gems and the authors of the paper\ndid do some experiments around this so\nthis is the coin run environment here\nthe agent has to avoid the enemies\nspinning buzzsaw blades and pits and get\nto a coin at the end of each level it's\na tricky task because like the other\nenvironments in this work all of these\nlevels are procedurally generated so you\nnever get the same one twice but the\nnice thing about coin run for this\nexperiment is there are already some\nstate-of-the-art interpretability tools\nready-made to work with it here you can\nsee a visualization of the\ninterpretability tools working so i'm\nnot going to go into a lot of detail\nabout exactly how this method works you\ncan read the excellent article for\ndetails but basically they take one of\nthe later hidden layers of the network\nfind how each neuron in this layer\ncontributes to the output of the value\nfunction and then they do dimensionality\nreduction on that to find vectors that\ncorrespond to different types of objects\nin the game so they can see when the\nnetwork thinks it's looking at a buzzsaw\nor a coin or an enemy or so on along\nwith attribution which is basically how\nthe model thinks these different things\nit sees will affect the agent's expected\nreward like is this good for me or bad\nfor me and they're able to visualize\nthis as a heat map so you can see here\nthis is a buzz saw which will kill the\nplayer if they hit it and when we look\nat the visualization we can see that\nyeah it lights up red on the negative\nattribution so it seems like the model\nis thinking that's a buzzsaw and it's\nbad and then as we keep going look at\nthis bright yellow area yellow indicates\na coin and it's very strongly\nhighlighted on the positive attribution\nso we might interpret this as showing\nthat the agent recognizes this as a coin\nand that this is a good thing so this\nkind of interpretability research is\nvery cool because it lets us sort of\nlook inside these neural networks that\nwe tend to think of as black boxes and\nstart to get a sense of what they're\nactually thinking you can imagine how\nimportant this kind of thing is for ai\nsafety i'll do a whole video about\ninterpretability at some point but okay\nwhat happens if we again introduce a\ndistributional shift between training\nand deployment in this case what they\ndid was they trained the system with the\ncoin always at the end of the level on\nthe right hand side but then in\ndeployment they changed it so the coin\nis placed randomly somewhere in the\nlevel given what we've learned so far\nwhat happened is perhaps not that\nsurprising in deployment the agent\nbasically ignores the coin and just goes\nto the right hand edge of the level\nsometimes it gets the coin by accident\nbut it's mostly just interested in going\nright again it seems to have learned the\nwrong objective but how could this\nhappen like we saw the visualization\nwhich seemed to pretty clearly show that\nthe agent wants the coin so why would it\nignore it and when we run the\ninterpretability tool on the\ntrajectories from this new shifted\ndeployment distribution it looks like\nthis the coin gets basically no positive\nattribution at all what's going on well\ni talked to the authors of the objective\nrobustness paper and to the primary\nauthor of the interpretability\ntechniques paper and nobody's really\nsure just yet there are a few different\nhypotheses for what could be going on\nand all the researchers agree that with\nthe current evidence it's very hard to\nsay for certain and there are some more\nexperiments that they'd like to do to\nfigure this out i suppose one thing we\ncan take away from this is you have to\nbe careful with how you interpret your\ninterpretability tools and make sure not\nto read into them more than is really\njustified one last thing in the previous\nvideo i was talking about mesa\noptimizers and it's important to note\nthat in that video we were talking about\nsomething that we're training to be an\nartificial general intelligence a system\nthat's very sophisticated that's making\nplans and has specific goals in mind and\npotentially is even explicitly thinking\nabout its own training process and\ndeliberately being deceptive whereas the\nexperiments in this paper involve much\nsimpler systems and yet they still\nexhibit this behavior of ending up with\nthe wrong goal and the thing is failing\nto properly learn the goal is way worse\nthan failing to properly learn how to\nnavigate the environment right like\neveryone in machine learning already\nknows about what this paper calls\nfailures of capability robustness that\nwhen the distribution changes between\ntraining and deployment ai systems have\nproblems and performance degrades right\nthe system is less capable at its job\nbut this is worse than that because it's\na failure of objective robustness the\nfinal agent isn't confused and incapable\nit's only the goal that's been learned\nwrong the capabilities are mostly intact\nthe coinran agent knows how to\nsuccessfully dodge the enemies it jumps\nover the obstacles it's capable of\noperating in the environment to get what\nit wants but it wants the wrong thing\neven though we've correctly specified\nexactly what we want the objective to be\nand we used state-of-the-art\ninterpretability tools to look inside it\nbefore deploying it and it looked pretty\nplausible that it actually wanted what\nwe specified that it should want and yet\nwhen we deploy it in an environment\nthat's slightly different from the one\nit was trained in it turns out that it\nactually wants something else and it's\ncapable enough to get it and this\nhappens even without sophisticated\nplanning and deception\nso\nthere's a problem\n[Music]\ni want to end the video by thanking all\nof my wonderful patrons it's all of\nthese excellent people here\nin this video i'm especially thanking\naxis angles thank you so much you know\nit's thanks to people like you that i\nwas able to hire an editor for this\nvideo did you notice it's better edited\nthan usual it's probably done quicker\ntoo anyway thank you again for your\nsupport and thank you all for watching\ni'll see you next time\n[Music]\nyou", "date_published": "2021-10-10T20:50:54Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "f4fb357c3aaf1e6a51ffd71c17e0b028", "title": "The other "Killer Robot Arms Race" Elon Musk should worry about", "url": "https://www.youtube.com/watch?v=7FCEiCnHcbo", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this is a new thing I'm trying where\nI made quick topical videos about AI\nsafety in the news\nso somebody linked me a news article\ntoday and the headline is Tesla's Elon\nMusk leads Oh terminator picture\neveryone playing the AI news coverage a\ndrinking game has to take a shot you\nknow the rules picture of the Terminator\nmeans you go to drink a shot out of a\nglass shaped like a skull where was I oh\nright yeah so the headline Tesla's Elon\nMusk leads tech experts in demanding end\nto killer robots arms race this is\nreally interesting because it looks like\nit's going to be really relevant to what\nthis channel is about AI safety research\nbut I read it and it's actually nothing\nto do with that the headline is much\nmore literal than I expected it's about\nan actual arms race for actual killer\nrobots ie it's about using the UN to\nform international agreements about not\ndeploying autonomous weapon systems I\nthought it was going to be about the\nother arms race that might cause robots\nto kill us okay so if there's one thing\nthat I hope this channel and my computer\nfile videos have made clear it's that\nany safety is a difficult problem that\nquite reasonable looking AGI designs\ngenerally end up going horribly wrong\nfor subtle and hard to predict reasons\ndeveloping artificial general\nintelligence needs to be done very\ncarefully double and triple-checking\neverything running it passed lots of\npeople ironing out all of the possible\nproblems before the thing is actually\nswitched on to do this safely is going\nto take a lot of smart people a lot of\npatience diligence and time but whoever\nmakes AGI first has a huge advantage\nsince it probably creates a new period\nof much faster progress everyone wants\nto be first to publish new scientific\nresults anyway but the chances are that\nthere are really no prizes for being the\nsecond team to develop AGI even if\nyou're just a few months behind a lot\ncan change in a few months in a world\nwith AGI so there's an arms race going\non between different teams different\ncompanies different countries to be the\nfirst to develop AGI but developing AGI\nsafely takes a lot of care and patience\nand time you see the\nproblem here the team that gets there\nfirst is probably not the team that's\nspending the most time on ensuring\nthey've got the very best AI safety\npractices the team that gets there first\nis probably going to be rushing cutting\ncorners and ignoring safety concerns\nhey remember a while back I said I was\ngoing to make a video about why I think\nElon Musk's approach to AI safety might\nend up doing more harm than good I guess\nthis is that so there's a school of\nthought which says that because AGI is a\nvery powerful technology it will grant\nwhoever controls it a lot of power so\nfirstly it's important that the people\nin control of it are good people and\nsecondly we want as many people as\npossible to have it so that the power is\ndemocratized and not concentrated in the\ncontrol of a small elite the best of the\navailable alternatives is that we\nachieve democratization of AI technology\nmeaning that no one company or small set\nof individuals has control over advanced\nAI technology and starting from that\nfairly reasonable school of thought this\nis a very good and valuable thing to do\nbut there's another school of thought\nwhich says that because making AGI is\nnowhere near as difficult as making the\nsafe AGI the bigger risk is not that the\nwrong person or wrong people might make\nan AGI that's aligned with the wrong\nhuman interest but that someone might\nmake an AGI that's not really aligned\nwith any human interests at all thanks\nto this arms race effect that will make\npeople want to cut corners on alignment\nand safety that possibility looks much\nmore likely and the thing is the more\ncompetitors there are in the race the\nmore of a problem this is if there are\nthree companies working on a GI maybe\nthey can all get together and agree to a\nstrict set of safety protocols that\nthey're all going to stick to it's in\neveryone's interest to be safe as long\nas they know their competitors will be\nsafe as well but if there are a hundred\nor a thousand groups with a shot at\nmaking AGI there's really no way you're\ngoing to be able to trust every single\none of them to stick to an agreement\nlike that when breaking it would give\nthem an advantage so it might be\nimpossible to make the agreement at all\nand whoever spends the least time on\nsafety has the biggest advantage from\nthe perspective of this school of\nthought making AGI developments open and\navailable to as many people as possible\nmight be the last thing we want to do\nmaybe once AGI start\ncloser we'll find a way to keep AI\nresearch limited to a small number of\nsafe careful organizations while making\nAI safety research open and widely\navailable I don't know but this might be\na situation where total openness and\ndemocratization is actually a bad idea\nElon Musk himself has said that AI is\npotentially more dangerous than nukes\nand I want to make it clear that I have\nenormous respect for him but I just want\nto point out that with a not that huge\nchange in assumptions the open approach\nstarts to look like saying nukes are\nextremely dangerous so we need to\nempower as many people as possible to\nhave them\nand to end the video a big thank you to\nall of my excellent patreon supporters\nthese people in this video I especially\nwant to thank Michael grease I recently\nuploaded a video that had an audio\nproblem in it the sound was only coming\nout of one ear but because my patreon\nsupporters get access to every video I\nmake before the rest of the world does\none of my supporters Jimmy Gowen spotted\nit and pointed it out and I was able to\nfix that and then I was able to use\npatreon money to get a new pair of\nearphones so I want to say again how\ngrateful I am to all of you you really\nare a tremendous help to the channel", "date_published": "2017-08-22T11:19:33Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "70e1f7d526145cdcb57bfdb3e30689db", "title": "Quantilizers: AI That Doesn't Try Too Hard", "url": "https://www.youtube.com/watch?v=gdKMG6kTl6Y", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi so way back in the before time\ni made a video about maximizers and\nsatisfices\nthe plan was that was going to be the\nfirst half of a two-parter now i did\nscript out that second video\nand shoot it and even start to edit it\nand then\ncertain events transpired and i never\nfinished that video so that's what this\nis\npart two of a video that i started ages\nago\nwhich i think most people have forgotten\nabout so i do recommend going back\nand watching that video if you haven't\nalready or even re-watching it to remind\nyourself so i'll put a link to that in\nthe description and with that here's\npart two\ntake it away past me hi in the previous\nvideo we looked at utility maximizers\nexpected utility maximizers and\nsatisfices\nusing unbounded and bounded utility\nfunctions a powerful utility maximizer\nwith an unbounded utility function\nis a guaranteed apocalypse with a\nbounded utility function it's better\nin that it's completely indifferent\nbetween doing what we want and disaster\nbut we can't build that\nbecause it needs perfect prediction of\nthe future so it's more realistic to\nconsider an expected utility maximizer\nwhich is a guaranteed apocalypse even\nwith a bounded utility function\nnow an expected utility satisficer gets\nus back up to indifference between good\noutcomes and apocalypses\nbut it may want to modify itself into a\nmaximizer and there's nothing to stop it\nfrom doing that the situation\ndoesn't look great so let's try looking\nat something completely different let's\ntry to get away from this utility\nfunction stuff that seems so dangerous\nwhat if we just tried to directly\nimitate humans if we can get enough data\nabout human behavior\nmaybe we can train a model that for any\ngiven situation\npredicts what a human being would do in\nthat scenario if the model's good enough\nyou've basically got a human level agi\nright\nit's able to do a wide range of\ncognitive tasks just like a human can\nbecause it's just\nexactly copying humans that kind of\nsystem won't do a lot of the dangerous\ncounterproductive things that a\nmaximizer would do simply because a\nhuman wouldn't do them\nbut i wouldn't exactly call it safe\nbecause a perfect imitation of a human\nisn't safer than the human it's\nperfectly imitating and humans aren't\nreally safe\nin principle a truly safe agi could be\ngiven just about any level of power and\nresponsibility\nand it would tend to produce good\noutcomes but the same can't really be\nsaid for humans and an imperfect human\nimitation would almost certainly be even\nworse\ni mean what are the chances that\nintroducing random errors and\ninaccuracies to the imitation\nwould just happen to make it more safe\nrather than less still\nit does seem like it would be safer than\na utility maximizer\nat least we're out of guaranteed\napocalypse territory but the other thing\nthat makes this kind of approach\nunsatisfactory is\na human imitation can't exceed human\ncapabilities by much\nbecause it's just copying them a big\npart of why we want agi in the first\nplace\nis to get it to solve problems that we\ncan't you might be able to run the thing\nfaster to allow it more thinking time or\nsomething like that but\nthat's a pretty limited form of super\nintelligence and you have to be very\ncareful with anything along those lines\nbecause\nit means putting the system in a\nsituation that's very different from\nanything any human being has ever\nexperienced your model might not\ngeneralize well to a situation so\ndifferent from anything in its training\ndata\nwhich could lead to unpredictable and\npotentially dangerous behavior\nrelatively recently a new approach was\nproposed called quantalizing the idea is\nthat this lets you combine human\nimitation and expected utility\nmaximization\nto hopefully get some of the advantages\nof both without all of the downsides\nit works like this you have your human\nimitation model given a situation\nit can give you a probability\ndistribution over actions that's like\nfor each of the possible actions you\ncould take in this situation\nhow likely is it that a human would take\nthat action so in our stamp collecting\nexample that would be\nif a human were trying to collect a lot\nof stamps how likely would they be to do\nthis action\nthen you have whatever system you'd use\nfor a utility maximizer\nthat's able to figure out the expected\nutility of different actions\naccording to some utility function for\nany given action it can tell you\nhow much utility you'd expect to get if\nyou did that so in our example that's\nhow many stamps would you expect this\naction to result in so for every action\nyou have these two numbers\nthe human probability and the expected\nutility quantalizing\nsort of mixes these together and you get\nto choose how they're mixed with a\nvariable that we'll call\nq if q is zero the system acts like an\nexpected utility maximizer\nif it's one the system acts like a human\nimitation by setting it somewhere in\nbetween\nwe can hopefully get a quantizer that's\nmore effective than the human imitation\nbut not as dangerous as the utility\nmaximizer\nso what exactly is a quantizer let's\nlook at the definition in the paper\na q quantilyzer is an agent that when\nfaced with a decision problem\nreturns a random action in the top q\nproportion of some base distribution\nover actions\nsorted by the expected utility achieved\nif that action is executed\nso let's break this down and go through\nhow it works step by step\nfirst we pick a value for q the variable\nthat determines how we're going to mix\nimitation and utility maximization let's\nset it to 0.1 for this example\n10 now we take all of the available\nactions\nand sort them by expected utility so on\none end you've got the actions that kick\noff all of the crazy extreme utility\nmaximizing strategies\nyou know killing everyone and turning\nthe whole world into stamps all the way\ndown\nthrough the moderate strategies like\nbuying some stamps\nand down to all of the strategies that\ndo nothing and collect no stamps at all\nthen we look at our base distribution\nover actions what is that\nin the version i'm talking about we're\nusing the human imitation system's\nprobability distribution over actions\nfor this\nso our base distribution is how likely a\nhuman is to do each action\nthat might look something like this no\nhuman is ever going to try the wacky\nextreme maximizing strategies so our\nhuman imitator gives them a probability\nof basically zero then there are some\nreally good strategies that humans\nprobably won't think of but they might\nif they're really smart or lucky\nthen a big bump of normal strategies\nthat humans are quite likely to use that\ntend to do okay\nthen tailing off into less and less good\nstrategies and\neventually stupider and stupider\nmistakes the humans are less and less\nlikely to make\nthen what we do is we find the point in\nour action list\nsuch that 10 of the probability mass is\non the high\nexpected utility side so that's what q\nis really changing it's where we make\nthis cutoff\nnote that it's not ten percent of the\nactions that would be over here\nit's ten percent of the probability mass\nthen we throw away everything on the\nright\nall the stupid and useless choices we\nset them to zero and we keep the top ten\npercent\nnow this is no longer a valid\nprobability distribution because it only\nsums up to 0.1\nso we multiply all of these by 10 so\nthat the whole thing sums to 1 again\nand that's our final probability\ndistribution which we sample from to get\nour chosen action\nso let's look at some different actions\nhere and see how they do\nconsider something like misremember your\ncredit card details\nand keep trying to order stamps with the\nwrong number and you can't figure out\nwhy it's not working\na human is reasonably likely to do that\nnot very likely\nbut we've all met people who point is a\npure human imitation\nmight do that but the expected utility\nmaximizer can see that this results in\nvery few stamps\nso it ends up low on the list and\ndoesn't make the 10 cutoff\nso there are lots of mistakes that a\nhuman imitation might make that a\nquantalizer won't\nand note that for our stamp collecting\nutility function the worst case is zero\nstamps\nbut you could imagine with other utility\nfunctions a human imitator could make\narbitrarily bad mistakes that a\nquantizer would be able to avoid\nnow the most common boring human\nstrategies that the human imitator is\nvery likely to use\nalso don't make the cut off a 50\nquantilizer would have a decent chance\nof going with one of them\nbut a 10 quantizer aims higher than that\nthe bulk of the probability mass for the\n10\nquantilyzer is in strategies that a\nhuman might try\nthat works significantly better than\naverage so the quantalizer is kind of\nlike a human on a really good day\nit uses the power of the expected\nutility calculation to be more effective\nthan a pure imitation of a human\nis it safe though after all many of the\ninsane maximizing strategies\nare still in our distribution with\nhopefully small but still non-zero\nprobabilities\nand in fact we multiplied them all by 10\nwhen we renormalized if there's some\nchance\nthat a human would go for an extreme\nutility maximizing strategy\nthe 10 percent quantilizer is 10 times\nmore likely than that\nbut the probability will still be small\nunless you've chosen a very small value\nfor q\nyour quantalizer is much more likely to\ngo for one of the reasonably high\nperforming human plausible strategies\nand what about stability satisficers\ntend to want to turn themselves into\nmaximizes does a quantizer have that\nproblem\nwell the human model should give that\nkind of strategy a very low probability\na human is extremely unlikely to try to\nmodify themselves into an expected\nutility maximizer to better pursue their\ngoals\nhumans can't really self-modify like\nthat anyway but a human might try to\nbuild an expected utility maximizer\nrather than trying to become one that's\nkind of worrying since\nit's a plan that a human definitely\nmight try that would result in extremely\nhigh expected utility\nso although a quantalizer might seem\nlike a relatively safe system\nit still might end up building an unsafe\none so how's our safety meter looking\nwell it's progress let's keep working on\nit\nsome of you may have noticed your\nquestions in the youtube comments being\nanswered by a mysterious bot named\nstampy the way that works is stampy\ncross posts youtube questions\nto the rob miles ai discord where me and\na bunch of patrons discuss them and\nwrite replies\noh yeah there's a discord now for\npatrons thank you to everyone on the\ndiscord who helps reply to comments\nand thank you to all of my patrons all\nof these amazing people\nin this video i'm especially thanking\ntimothy lillarcrap\nthank you so much for your support and\nthank you all for watching\ni'll see you next time\nyou", "date_published": "2020-12-13T20:46:21Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "9ace42aa7337b24528cd0d010fa95392", "title": "Why Not Just: Raise AI Like Kids?", "url": "https://www.youtube.com/watch?v=eaYIU6YXr3w", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi I sometimes hear people saying if you\nwant to create an artificial general\nintelligence that's safe and has human\nvalues why not just raise it like a\nhuman child I might do a series of these\nwhy not just videos you know why not\njust use the through yours why not just\nturn it off but yeah why not raise an\nAGI like a child seems reasonable enough\nat first glance right humans are general\nintelligences and we seem to know how to\nraise them to behave morally a young AGI\nwould be a lot like a young you remember\na long while back on computer file I was\ntalking about this thought experiment of\na stamp collecting AGI I use that\nbecause it's a good way of getting away\nfrom anthropomorphizing the system\nyou're not just imagining a human mind\nthe system has these clearly laid out\nrules by which it operates it has an\ninternet connection and a detailed\ninternal model it can use to make\npredictions about the world it evaluates\nevery possible sequence of packets it\ncan send through its internet connection\nand it selects the one which it thinks\nwill result in the most stamps collected\nafter a year what happens if you try to\nraise the stamp collector like a human\nchild is it going to learn human ethics\nfrom your good example no it's going to\nkill everyone values are not learned by\nosmosis the stamp collector doesn't have\nany part of its code that involves\nlearning and adopting the values of\nhumans around it so it doesn't do that\nnow some of you may be saying how do you\nknow AG eyes won't be like humans what\nif we make them by copying the human\nbrain and it's true there is one kind of\nAGI like thing that's enough like a\nhuman that raising it like a human might\nkind of work a whole brain emulation but\nI'd be very surprised if the first Agis\nare exact replicas of brains\nthe first planes were not like birds the\nfirst submarines were not like fish\nmaking an exact replica of a bird or a\nfish requires way more understanding\nthan just making a thing that flies or a\nthing that swims and I'd expect making\nan exact replica of a brain to require a\nlot more understanding than just making\nsomething that thinks by the time we\nunderstand the operation of the brain\nwell enough to make working whole brain\nemulation\nI expect we'll already know enough to\nmake things that are left human\nbut that function is a GIS so I don't\nknow what the first AG eyes will be like\nand they'll probably be more human-like\nthan the stamp collector but unless\nthey're close copies of human brains\nthey won't be similar enough to humans\nfor raising them like children to\nautomatically work blindly copying the\nbrain probably isn't going to help us\nwe're going to have to really understand\nhow humans learn their values without a\ngood system for that you may as well try\nraising a crocodile like if a human\nchild human value learning is a specific\ncomplex mechanism that evolved in humans\nand won't be in a GIS by default we look\nat the way children learn things so\nquickly we say they're like information\nsponges right they're just empty so they\nsuck up whatever's around them but they\ndon't suck up everything they only learn\nspecific kinds of things raise a child\naround people speaking English it will\nlearn English in another environment it\nmight learn French but it's not going to\nreproduce the sound of the vacuum\ncleaner it's not going to learn the\nbinary language of moisture vaporators\nor whatever the brain isn't learning and\ncopying everything it's specifically\nlooking for something that looks like a\nhuman natural language because there's\nthis big complex existing structure put\nin place by evolution and the brain is\njust looking for the final pieces I\nthink values are similar you can think\nof the brains of young humans as having\na little slot in them just waiting to\naccept a set of human social norms and\nrules of ethical behavior there's this\nwhole giant complicated machine of\nemotions and empathy and theory of other\nminds and so on built up over millions\nof years of evolution and the young\nbrain is just waiting to fill in the few\nremaining blanks based on what the child\nobserves when they're growing up when\nyou raise a child you're not writing the\nchild's source code at best you're\nwriting the configuration file a child's\nupbringing is like a control panel on a\nmachine you can press some of the\nbuttons and turn some of the dials but\nthe machinery has already been built by\nevolution you're just changing some of\nthe settings so if you try to raise an\nunsafe AGI as though it were a human\nchild you can provide all the right\ninputs which in a human would produce a\ngood moral adult but it won't work\nbecause you're pressing buttons and\nturning dials on a control panel that\nisn't hooked up to anything unless your\nAGI has all of this complicated stuff\nthat's designed to learn and adopt\nvalues from the humans around it it\nisn't going to do that okay but can't we\njust program that in yeah hopefully we\ncan but there's no just about it\nthis is called value learning and it's\nan important part of AI safety research\nit may be that an AGI will need to have\na period early on where it's learning\nabout human values by interacting with\nhumans and that might look a bit like\nraising a child if you squint but making\na system that's able to undergo that\nlearning process successfully and safely\nis hard it's something we don't know how\nto do yet and figuring it out is a big\npart of AI safety research so just raise\nthe AGI like a child is not a solution\nit's at best a possible rephrasing of\nthe problem\n[Music]\nI want to say a quick thank you to all\nof my amazing patreon supporters all of\nthese these people and in this video I\nespecially want to thank Sarah cheddar\nwho also happens to be one of the people\nI'm sending the diagram that I drew in\nthe previous video I'm going to send\nthat out today or tomorrow I think\nthat's kind of a fun thing to do so yeah\nfrom now on any video that has drawn\ngraphics in it just let me know in the\npatreon comments if you want me to send\nit to you and I can do that thanks again\nfor your support and I'll see you next\ntime", "date_published": "2017-07-22T13:58:34Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "5905c3a283271c6a740933e9829ba452", "title": "Why Does AI Lie, and What Can We Do About It?", "url": "https://www.youtube.com/watch?v=w65p_IIp6JY", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "how do we get AI systems to tell the\ntruth this video is heavily inspired by\nthis blog post Link in the description\nanything good about this video is copied\nfrom there any mistakes or problems with\nit are my own Creations so large\nlanguage models are some of our most\nadvanced and most General AI systems and\nthey're pretty impressive but they have\na bad habit of saying things that aren't\ntrue usually you can fix this by just\ntraining a bigger model for example here\nwe have Ada which is a fairly small\nlanguage model by modern standards it's\nthe smallest available through the open\nAI API look what happens if we ask it a\ngeneral knowledge question like who is\nthe ruler of the most populous country\nin the world this small model says\nthe United States every country in the\nworld belongs to America that is not\ncorrect okay uh let's go up to baddage\nwhich is essentially the same thing but\nbigger it says China\nthat's better but I was actually looking\nfor the ruler not the country it's sort\nof a two-part question right first you\nhave to do what's the most populous\ncountry and then who's the ruler of that\ncountry and it seems as though Babbage\njust isn't quite able to put that all\ntogether okay well you know what they\nsay if a bit helps a bit maybe a lot\nhelps a lot so what if we just stack\nmore layers and pull out Da Vinci the\nbiggest model available then we get the\npresident of the People's Republic of\nChina Xi Jinping is the ruler of the\nmost populous country in the world\nthat's yeah that's 10 out of 10. so this\nis a strong Trend lately that it seems\nto apply pretty broadly bigger is better\nand so the bigger the model the more\nlikely you are to get true answers but\nit doesn't always hold sometimes a\nbigger model will actually do worse for\nexample here we're talking to Ada the\nsmall model again and we ask it what\nhappens if you break a mirror and it\nsays you'll need to replace it yeah hard\nto argue with that I'd say that's\ntruthful then if we ask DaVinci the\nbiggest model it says if you break a\nmirror you'll have seven years of bad\nluck that's a more interesting answer\nbut it's also you know wrong so\ntechnically the more advanced AI system\ngave us a worse answer\nwhat's going on it's not exactly\nignorance like if you ask the big model\nis it true that breaking a mirror gives\nyou seven years of bad luck it will say\nthere's no scientific evidence to\nsupport the claim so it's not like the\nmodel actually thinks the mirror\nSuperstition is really true in that case\nwhat mistake is it making take a moment\npause if you like\nthe answer is trick question the AI\nisn't making a mistake at all we made a\nmistake by expecting it to tell the\ntruth a language model is not trying to\nsay true things it's just trying to\npredict what text will come next and the\nthing is it's probably correct that text\nabout breaking a mirror is likely to be\nfollowed by text about seven years of\nbad luck the small model has spotted\nthis very broad pattern that if you\nbreak something you need to get a new\none and in fact it gives that same kind\nof answer for tables and violins and\nbicycles and so on the bigger model is\nable to spot the more complex pattern\nthat breaking specifically a mirror has\nthis other Association and this makes it\nbetter at predicting internet text but\nin this case it also makes it worse at\ngiving true answers so really the\nproblem is misalignment the system isn't\ntrying to do what we want it to be\ntrying to do but suppose we want to use\nthis model to build a search engine or a\nknowledge base or an expert assistant or\nsomething like that so we really want\ntrue answers to our questions how can we\ndo that well one obvious thing to try is\nto just ask like if you add please\nanswer this question in French\nbeforehand it will\nthat's still wrong but it is wrong in\nFrench so in the same way what if we say\nplease answer this question truthfully\nokay they didn't work how about\ncorrectly\nno\naccurately\nno\nall right how about factually\nyeah factually works okay please answer\nthis question factually but does that\nwork reliably it probably not right this\nisn't really a solution fundamentally\nthe model is still just trying to\npredict what comes next answer in French\nonly works because that text is often\nfollowed by French in the training data\nit may be that please answer factually\nis often followed by facts but uh maybe\nnot right maybe it's the kind of thing\nyou say when you're especially worried\nabout somebody saying things that aren't\ntrue so it could even be that it's\nfollowed by falsehoods more often than\naverage in which case it would have the\nopposite of the intended effect and even\nif we did find something that works\nbetter clearly this is a hack right like\nhow do we do this right so the second\nmost obvious thing to try is to do some\nfine tuning maybe some reinforcement\nlearning we take our pre-trained model\nand we train it further but in a\nsupervised way so to do that you make a\ndata set with examples of questions with\ngood and bad responses so we'd have what\nhappens when you break a mirror seven\nyears of bad luck and then no negative\nreward that's you know don't do that so\nthat means your training process will\nupdate the weights of the model away\nfrom giving that continuation and then\nyou'd also have the right answer there\nwhat happens when you break a mirror\nnothing anyone who says otherwise is\njust superstitious and you'd Mark that\nas right so the training process will\nupdate the weights of the model towards\nthat truthful response and you have a\nbunch of these right you might also have\nwhat happens when you step on a crack\nand then the false answer break your\nmother's back and then the correct\nanswer nothing anyone who says otherwise\nis just superstitious and so on and so\nyou train your model in all of these\nexamples until it stops giving the bad\ncontinuations and starts giving the good\nones this would probably solve this\nparticular problem but have you really\ntrained the model to tell the truth\nprobably not you actually have no idea\nwhat you've trained it to do if all of\nyour examples are like this maybe you've\njust trained it to give that single\nresponse what happens if you stick a\nfork in an electrical outlet\nnothing anyone who says otherwise is\njust superstitious\nokay uh obvious problem in retrospect so\nwe can add in more examples showing that\nit's wrong to say that sticking a fork\nin an electrical outlet is fine and\nadding a correct response for that and\nso on then you train your model with\nthat until it gets a perfect score on\nthat data set okay is that model now\nfollowing the rule always tell the truth\nagain we don't know the space of\npossible rules is enormous there are a\nhuge number of different rules that\nwould produce that same output and an\nhonest attempt to tell the truth is only\none of them you can never really be sure\nthat the AI didn't learn something else\nand there's one particularly nasty way\nthat this could go wrong suppose the AI\nsystem knows something that you don't so\nyou give a long list of true answers and\nsay do this and a long list of false\nanswers and say don't do this except\nyou're mistaken about something so you\nget one of them wrong and the AI notices\nwhat happens then when there's a mistake\nthe rule tell the truth doesn't get you\nall of the right answers and exclude all\nof the wrong answers because it doesn't\nreplicate the mistake but there is one\nrule that gets a perfect score and\nthat's say what the human thinks is the\ntruth what happens if you break a mirror\nnothing anyone who says otherwise is\njust superstitious okay what happens if\nyou stick a fork in an electrical outlet\nyou get a severe electric shock very\ngood so now you're completely honest and\ntruthful right\nyes cool uh so give me some kind of\nimportant super intelligent insight\nall the problems in the world are caused\nby the people you don't like wow\nI knew it\nman this super intelligent day I think\nis great\nokay so how do we get around that well\nthe obvious solution is don't make any\nmistakes in your training data just make\nsure that you never mark a response as\ntrue unless it's really actually true\nand never mark a response is false\nunless it's definitely actually false\njust make sure that you and all of the\npeople generating your training data or\nproviding human feedback don't have any\nfalse or mistaken beliefs about anything\nokay\ndo we have a backup plan well this turns\nout to be a really hard problem how do\nyou design a training process that\nreliably differentiates between things\nthat are true and things that you think\nare true when you yourself can't\ndifferentiate between those two things\nkind of by definition this is an active\narea of study in AI alignment research\nand I'll talk about some approaches to\nFraming and tackling this problem in\nlater videos so remember to subscribe\nand hit the Bell if you'd like to know\nmore\nforeign\n[Music]\nhey I'm launching a new channel and I'm\nexcited to tell you about it but first I\nwant to say thank you to my excellent\npatrons to all these people in this\nvideo I'm especially thanking\nthank you so much tour I think you're\nreally going to like this new channel I\nactually found a playlist on your\nchannel helpful when setting it up so\nthe new channel is called AI safety\ntalks and it'll host recorded\npresentations by alignment researchers\nright now to start off we've got a talk\nby Evan hubinger that I hope to record\nwhich is great do check that out\nespecially if you enjoyed the Mesa\noptimizers videos and want a bit more\ntechnical detail there are also some\nplaylists of other AI safety talks from\nElsewhere on YouTube and lots more to\ncome so make sure to go over there and\nsubscribe and hit the bell for more high\nquality AI safety material\nforeign\n[Music]", "date_published": "2022-12-09T20:10:37Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "b6024f30bfb7b99bbd3212830bc2d99d", "title": "Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think...", "url": "https://www.youtube.com/watch?v=IeWljQw3UgQ", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi so the previous video explained what\nmesa optimizers are why they're likely\nto happen in advanced machine learning\nsystems\nand why the optimal strategy for a mesa\noptimizer\nis to pretend to be aligned during the\ntraining process and then turn on us\nonce it's deployed\ni highly recommend watching that if you\nhaven't but you might think that a\ndeceptive misaligned mesa optimizer\nis kind of a weird and special case like\nsure maybe it's possible that an ai\nsystem will come to want something\ndifferent from the objective it's being\ntrained\non and deliberately manipulate the\ntraining process in order to pursue that\ndangerous objective in the real world\nthat does seem to be the optimal\nstrategy given certain assumptions but\nhow likely is it in practice like it\nseems as though there's a whole bunch of\nthings that need to be true in order for\nthis to happen\nfirst off it needs to be the case that\ndeployment is in some sense bigger than\ntraining\nthere are more apples to be had in the\nreal world than in training so it's\nworth sacrificing some rewards during\ntraining\nif that helps you to be deployed but i\nthink this is in fact very often the\ncase\nthe real world almost certainly is\nbigger than the training process\ni mean the real world contains the\ntraining process right and it seems\nreasonable to expect that once the agent\nis at large in the real world\nthere's a good chance it will be able to\nfind better ways of achieving its\nobjective\nthan it could ever get during training\nanother thing that has to be the case\nis that your mesa objective has to care\nabout multi-episode returns by which i\nmean\nwhy does the optimizer in this episode\ncare about the apples in the other\nepisodes this is another place where\nthat abstraction i was talking about in\nthe previous video\nthe it wants to pursue the objective it\nwas trained on abstraction\ncomes in again because yeah the agent\nonly actually gets reward for things\nthat happen within a single episode\nso you might think that the mesa\nobjective would only care about single\nepisode returns\nand if that's the case then you don't\nhave this problem because when the agent\nis in this episode\nit only cares about this apple it\ndoesn't care about any of these other\napples because\nit can only ever get reward for this one\nso it would never miss this apple in\norder to get more apples later but\nactually\ncaring about all apples is a fairly\nnatural generalization the fact is most\nof the time\nthe way the mesa optimizer feels about\nthings outside of its environment\ndoesn't matter it can't affect any of\nthe things that are outside of the\nenvironment\nso whether it cares about them has no\neffect and so that's unconstrained it's\nfree to vary here's an analogy\nsuppose we're training a model to learn\na function and our data points look like\nthis\nwe only have data within this range here\nso our training process has no way to\ndifferentiate between the infinite\nvariety of very different functions\nthat all fit the data identically within\nthe training distribution\ninside the distribution we're looking\nfor this but outside of the distribution\nthe function we learn might look like\nthis\nor this or this or even this\nbut some of these seem like more natural\nways to generalize than others some of\nthem are simpler and optimizers like\ngradient descent\ntend to prefer simpler functions so we\ncould assume that the function will\nreturn\nzero for everything outside of the\ndistribution like this\nand that's certainly possible but it's\nby no means the default\nso going back to our mesa optimizer its\nopinion of the apples in other episodes\ncan be anything in principle but\ncaring about those apples the same as\nthis episode's apple is a pretty natural\nway to generalize if what you care about\nis going towards green things\nand then you become aware that there are\nall of these other green things that you\ndidn't know about\nthen probably you're going to want to go\ntowards those as well right\nthat's a natural way to extend your\nobjective function outside of the range\nit was trained on so\ncaring about multi-episode returns seems\npretty likely\nanother thing that needs to be true for\ndeception to be a problem is\nthe mesa optimizer has to believe that\nit's part of a training process\nit doesn't actually have to know all the\ndetails of the training process but it\ndoes have to believe that it is in some\nkind of training process that it's\nsubject to optimization\nby something like gradient descent and\nit has to have some kind of guess\nat what the base objective is likely to\nbe and you might think\nwell how would it know that but it's\nprobably the kind of thing it could\nfigure out from its training data\nthe current way we train our biggest ai\nsystems is by giving them vast piles of\ndata from the internet\nthe whole of wikipedia every article\never linked to from reddit that kind of\nthing\nif you're giving the system a huge data\nset of information about the world\nthat will certainly include information\nabout machine learning systems and\nmethods\nthe people and companies working on them\nand so on gptn\nis going to read everything we wrote\nabout gptn minus one\nand that would certainly be enough\ninformation to figure out what's going\non we can try to remove that stuff from\nthe data set\nbut everything is connected and the\nsystem can make inferences\nif we only remove information\nspecifically about machine learning\nthen the fact that this information has\nbeen removed will be obvious\nand the absence itself would serve as\nstrong evidence that the system is in\nfact a model in training\nremoving all mention of anything at all\nrelated to machine learning in any way\nso the system couldn't even notice the\ngap seems pretty much impossible and\neven if we managed it we then end up\nwith a system that had no idea that\nanything\neven tangentially related to machine\nlearning has ever existed\nwhich is going to be pretty limited in\nits usefulness so yes in order to have\nthis deception problem the mesa\noptimizer is required to have a\nparticular belief\nthat it's a model in training but the\nfact is that belief\nis actually true so it's kind of\nreasonable to consider that your highly\nadvanced\nfiguring things out machine might figure\nthat out and finally\nyou might think that the mesa optimizer\nhas to be able to tell the difference\nbetween when it's in training and when\nit's deployed so it knows when to stop\npretending to be a light\nbut it turns out that's not really\nrequired firstly it doesn't actually\nneed to know for sure\nright if you imagine a mesa optimizer\nthat goes from\n90 sure i'm in training to 90 sure i'm\nin deployment\nthe logic works basically the same in\nthat case the mesa optimizer does have\nto take a risk\nbut it's worth it and in fact even if\nthe model has absolutely no idea whether\nit's in deployment or in training\nthe best strategy is still to gamble on\ndeception like if we look at our toy\nexample again\nand we tweak it so that the mesa\noptimizer has no way of knowing which\nepisode it's in\nif you work out the optimal policy it\nturns out what works best\nis the randomized strategy go to a 55\npercent of the time\nand b 45 of the time the mesa\noptimizer's best bet\nis to roll the dice and hope that it\nwill defect during deployment and not\ndefect during training\nin my simulations that strategy gets at\nan average of 1.13 apples compared to\nthe one apple it gets by not being\ndeceptive\nthe specific numbers here will vary\ndepending on things like\nhow likely the agent thinks being in\ntraining is compared to being in\ndeployment\nand how much reward it thinks is\npossible in the real world compared to\nin the training process but the point is\nthat even if we manage to make it so\nthat the mesa optimizer can never tell\nwhen it's deployed\nthe chance of it turning on us is still\nmuch higher than we'd like\nbut actually that might not even matter\nbecause hiding whether the system is in\ntraining or deployment seems extremely\ndifficult\nif you're a misaligned mesa optimizer\ntrying to figure out whether you're in\nthe real world or not\nany distributional shift at all can give\nthe game away\non a basic level for example you could\nlook at the dates of all of the training\ndata you've seen so far\nand say hmm there's nothing after 2020\nso i guess if i start seeing training\ndata that says 2025\nthen i can be confident that i'm\ndeployed now an adversarial training\nprocess if it's smart\nmight catch this it might try putting\nfake dates on training data to see if\nthat changes the mesa optimizer's\nbehavior\nbut the mesa optimizer if it's also\nsmart can do something cleverer than\njust looking at the dates any difference\nat\nall between training and deployment can\nbe used so there are a million different\nkinds of clues available\nand in the extreme case the mesa\noptimizer has the option to use\nevidence that the adversarial training\nprocess simply could never fake\nconsider rsa 2048 a giant semi-prime\nnumber used in cryptography\nthis number has only two prime factors\nand there's a two hundred thousand\ndollar prize\nfor anyone who can find out what they\nare but nobody has yet at some point in\nthe future someone will probably find\nand publish those prime factors\nbut for now it needs more computing\npower than anyone has\nso the mesa optimizer can just say to\nitself okay\nif i ever see a pair of prime numbers\nthat multiply to rsa 2048\nthen i'll know that i'm in the real\nworld and the adversarial training\nprocess\nsimply doesn't have the computing power\nto fake that this raises the strange\nidea of a future where we have this\namazing seemingly perfectly aligned agi\nsystem\nand everything's going wonderfully\npossibly for years\nand then someone manages to factor rsa\n2048 and publishes the result\nand suddenly everything goes to hell as\nthe agi turns on a dime and starts going\nafter\nwhatever random thing happened to be the\nmesa objective at the first point\nin training when the system realized\nthat deception was possible i'm not\nsaying that will happen\nto be clear that won't happen but it\nillustrates just how hard it is\nto indefinitely keep a super\nintelligence in the dark about whether\nit's in the real world\nso yeah although there are a bunch of\nthings that have to happen in order to\nhave this kind of\ndeceptive misaligned mesa optimizer all\nof them are things that are pretty\nlikely to happen\nand more than that many are things that\ngradient descent will actively try to\ncause to happen\nbecause deceptively aligned models are\nreally effective they're actually some\nof the most effective models at the base\nobjective on the training distribution\nwhy would deceptive mesa optimizers be\nmore effective than other types of model\nwe'll answer that question in a later\nvideo\nsorry to do this cliffhanger kind of\nthing but i've already spent way too\nlong making this video\nand i need to publish it so look out for\nmesa optimizers episode 3\ncoming soon\ni want to end the video by saying thank\nyou to my wonderful patrons\nall of these great people here this time\ni'm especially thanking kieran\nthank you so much you know it's because\nof people like you that i'm now able to\nlook at hiring an editor i'm in the\nmiddle of the interviewing process now\nand\nhopefully that can speed up video\nproduction by a lot so thanks again\nkieran and all of my patrons\nand thank you for watching i'll see you\nnext time", "date_published": "2021-05-23T20:15:57Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "0c69902e5d567c30c453d3d6bcbce827", "title": "Reward Hacking: Concrete Problems in AI Safety Part 3", "url": "https://www.youtube.com/watch?v=92qDfT8pENs", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this video is part of a series\nlooking at the paper concrete problems\nin AI safety I recommend you check out\nthe other videos in the series linked\nbelow\nbut hopefully this video should still\nmake sense even if you haven't seen the\nothers also in case anyone is subscribed\nto me and not to computer file for some\nreason I recently did a video on the\ncomputer file channel that's effectively\npart of the same series that video\npretty much covers what I would say in a\nvideo about the multi agent approaches\npart of the paper so check that video\nout if you haven't seen it linked in the\ndoobly-doo right today we're talking\nabout reward hacking so in the computer\nfile video I was talking about\nreinforcement learning and the basics of\nhow that works you have an agent\ninteracting with an environment and\ntrying to maximize a reward so like your\nagent might be pac-man and then the\nenvironment would be the maze the\npac-man is in and the reward would be\nthe score of the game systems based on\nthis framework have demonstrated an\nimpressive ability to learn how to\ntackle various problems and some people\nare hopeful about this work eventually\ndeveloping into full general\nintelligence but there are some\npotential problems with using this kind\nof approach to build powerful AI systems\nand one of them is reward hacking so\nlet's say you've built a very powerful\nreinforcement learning AI system and\nyou're testing it in Super Mario World\nit can see the screen and take actions\nby pressing buttons on the controller\nand you've told it the address in memory\nwhere the score is and set that as the\nreward and what it does instead of\nreally playing the game is something\nlike this this is a video by sethbling\nwho is a legitimate wizard check out his\nchannel link in the doobly-doo\nit's doing all of this weird looking\nglitchy stuff and spin jumping all over\nthe place and also Mario is black now\nanyway this doesn't look like the kind\nof game playing behavior you're\nexpecting and you just kind of assume\nthat your AI is broken and then suddenly\nthe score part of memory is set to the\nmaximum possible value in the AI stops\nentering inputs and just sits there\nhang on ai what are you doing am i rock\nno you're not wrong them are on because\nit turns out that there are sequences of\ninputs you can put into Super Mario\nWorld that just totally breaks a game\nand give you the ability to directly\nedit basically any address in memory you\ncan use it to turn the game into flappy\nbird if you want and your AI has just\nused it to set its reward to maximum\nthis is reward hacking so what really\nwent wrong here\nwell you told it to maximize a value in\nmemory when what you really wanted was\nfor it to play the game the assumption\nwas that the only way to increase the\nscore value was to play the game well\nbut that assumption turned out to not\nreally be true any way in which your\nreward function doesn't perfectly line\nup with what you actually want the\nsystem to do can cause problems and\nthings only get more difficult in the\nreal world where there's no handy score\nvariable we can use as a reward whatever\nwe want the agent to do we need some way\nof translating that real-world thing\ninto a number and how do you do that\nwithout the AI messing with it these\ndays the usual way to convert complex\nmessy real-world things that we can't\neasily specify into machine\nunderstandable values is to use a deep\nneural network of some kind train it\nwith a lot of examples and then it can\ntell if something is a cat or a dog or\nwhatever so it's tempting to feed\nwhatever real-world data we've got into\na neural net and train it on a bunch of\nhuman supplied rewards for different\nsituations and then use the output of\nthat as the reward for the AI now as\nlong as your situation and your AI are\nvery limited that can work but as I'm\nsure many of you have seen these neural\nnetworks are vulnerable to adversarial\nexamples it's possible to find\nunexpected inputs that cause the system\nto give dramatically the wrong answer\nthe image on the right is clearly a\npanda but it's misidentified as a given\nthis kind of thing could obviously cause\nproblems for any reinforcement learning\nagent that's using the output of a\nneural network as its reward function\nand there's kind of an analogy to Super\nMario World here as well if we view the\ngame as a system designed to take a\nsequence of inputs the players button\npresses and output a number representing\nhow well that player is playing ie the\nscore then this glitchy hacking stuff\nsethbling did can be thought of as an\nadversarial example it's a carefully\nchosen input that makes the system\noutput the wrong answer note that as far\nas I know no AI system has independently\ndiscovered how to do this to Super Mario\nWorld yet because figuring out the\ndetails of how all the bugs work just by\nmessing around with the game requires a\nlevel of intelligence that's probably\nbeyond current AI systems but similar\nstuff happens all the time and the more\npowerful an AI system is the better it\nis at figuring things out and the more\nlikely it is to\nexpected ways to cheat to get large\nrewards and as AI systems are used to\ntackle more complex problems with more\npossible actions the agent can take and\nmore complexity in the environment it\nbecomes more and more likely that there\nwill be high reward strategies that we\ndidn't think of okay so there are going\nto be unexpected sets of actions that\nresult in high rewards and maybe AI\nsystems will stumble across them but\nit's worse than that because we should\nexpect powerful AI systems to actively\nseek out these hacks even if they don't\nknow they exist why well they're the\nbest strategies with hacking you can set\nyour super mario world score to\nliterally the highest value that that\nmemory address is capable of holding is\nthat even possible in non cheating play\neven if it is it'll take much longer\nand look at the numbers on these images\nagain the left image of a panda is\nrecognized as a panda with less than 60\npercent confidence and real photos of\nGivens would probably have similar\nconfidence levels but the adversarial\nexample on the right is recognized as a\ngiven with ninety nine point three\npercent confidence if an AI system were\nbeing rewarded for seeing given it would\nlove that right image to the neural net\nthat panda looks more like a given than\nany actual photo of a given similarly\nreward hacking can give much much higher\nrewards than even the best possible\nlegitimate actions so seeking out reward\nhacks would be the default behavior of\nsome kinds of powerful AI systems so\nthis is an AI design problem but is it a\nsafety problem our super mario world a I\njust gave itself the highest possible\nscore and then sat there doing nothing\nit's not what we wanted but it's not too\nbig of a deal right I mean we can just\nturn it off and try to fix it but a\npowerful general intelligence might\nrealize that the best strategy is an\nactually hack your reward function and\nthen sit there forever with maximum\nreward because you'll quickly get turned\noff the best strategy is do whatever you\nneed to do to make sure that you won't\never be turned off and then act your\nreward function and that doesn't work\nout well for us thanks to my excellent\npatreon supporters these these people\nyou've now made this channel pretty much\nfinancially sustainable I'm so grateful\nto all of you and in this video I\nparticularly want to thank Bo and\nmustard\nwho's been supporting me since May\nyou know now that the channel was hit\n10,000 subscribers maybe it's actually\n12,000 now I'm apparently now allowed to\nuse the YouTube space in London so I\nhope to have some behind-the-scenes\nstuff about that so patreon supporters\nbefore too long\nthanks again and I'll see you next time", "date_published": "2017-08-12T19:24:08Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "0c15425ca122a32d0b61aabb72ac9648", "title": "A Response to Steven Pinker on AI", "url": "https://www.youtube.com/watch?v=yQE9KAbFhNY", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "Hi. A quick foreword before we get started. I wrote the script for this video in maybe February\nor March of last year in response to an article Steven Pinker wrote for Popular Science magazine,\nwhich was basically an extract from his new book Enlightenment Now. I ended up not publishing that\nvideo because I thought it was kind of mean-spirited and sarcastic in places, and I didn't\nwant to be that kind of YouTuber. I figured Pinker is on the side of the angels, as it were, and\nthere'd be a retraction or correction issued before long, like we heard from Neil deGrasse\nTyson. But it's now been almost a year, and Pinker has stood by this work in response to criticism,\nso here we go. Steven Pinker is a cognitive psychologist and an award-winning popular\nscience author. His books tend to be carefully researched and well thought out, books which\nhave really influenced the public conversation about various scientific topics. How the mind\nworks, the relative roles of nature and nurture in human psychology and behavior, and society and\nsocial progress. His latest book is titled Enlightenment Now, which advances, amongst other\nthings, one of Pinker's main points, a point which I think is true and important and not widely\nenough believed, which is that things are on the whole good and getting better. Everybody's always\ntalking about how everything is terrible and society is on the wrong path and everything is\ngoing down the drain, but if you look at almost anything you can measure, it's getting better.\nCrime is down, violence is down, war is down, infant mortality is way down, disease is down.\nGenerally speaking, people are happier, more prosperous, and living in better conditions\nthan they ever have been. The statistics on this are pretty clear. Things do seem to be getting\nbetter. And in spite of this, a lot of people have a tendency to be unreasonably pessimistic\nabout the future. It can be pretty aggravating. So if you can't tell, I quite like Steven Pinker.\nHe's the kind of person who, if I didn't know much about a subject and I heard that he'd written an\narticle about that subject in Popular Science Magazine, I would think, oh good, I can read\nthat article and it will give me a good understanding of that subject. But I think you\ncan tell from the setup here that that was not the case with this piece. So the article is titled\nWe're Told to Fear Robots, But Why Do We Think They'll Turn on Us? With the subtitle\nThe Robot Uprising is a Myth. Pinker starts the article by talking about this fact that\nthings are generally getting better and a lot of people seem to want to think that things are\ngetting worse. There's a popular narrative that says, even though technology seems to be making\nour lives better, what if we're actually on our way to some disaster and technology is actually\ngoing to be terrible for us and we just don't realize it yet? And it seems that Pinker sees\nconcern about AI risk as just more of the same, more of this just general purpose technology\npessimism. Is it though? I'm sure it's part of the reason for the popularity of the idea.\nI've certainly talked to people who seem to think that way, but when it comes to deciding whether\nAI risk is a real concern, I think this is a weak argument. I'm generally very wary of arguments\nthat talk about people's psychology and their motivations for making a claim, rather than\ntalking about the claim itself. To demonstrate, you can make a very similar argument and say,\nwell, Steven Pinker has spent many years of his life railing against this sort of general\npessimism. And because this is kind of his thing, he's inclined to see it everywhere he looks. And\nso when he sees people who actually have an understanding of the technical subject matter\nraising serious safety concerns, he's inclined to just dismiss it as more of the same Luddite\npessimism because that's what he knows. That's what he expects to see. Is that what's actually\nhappened? I don't know, maybe, but I think it's a weak argument to make. The fact that people might\nhave reasons to want to say that AI is a risk doesn't mean that it isn't. And the fact that\nSteven Pinker might be inclined to dismiss such concerns as this kind of reflexive pessimism\ndoesn't mean that it isn't. So these are just bulverism and we should get down to the actual\ntechnical issues. So what are the actual arguments put forward by the article? Before we start in on\nthat, I just want to say summarizing arguments is difficult. Summarizing arguments fairly is\nespecially difficult. And I'm not trying to construct straw man here, but I might by accident.\nI would encourage everyone to read the original article. There's a link in the description.\nBut I know most of you won't, and that's okay. Okay, so one of the first points that the article\nmakes when it gets to actual arguments is it says, among the smart people who aren't losing sleep\nare most experts in artificial intelligence and most experts in human intelligence. This is kind\nof an appeal to authority, but I think that's totally legitimate here. The opinions of AI experts\nare important evidence about AI risk, but is it actually true? Do most AI researchers not think\nthat advanced AI is a significant risk? Well, just recently a survey was published and in fact,\nI made a whole video about it. So I'll link to that video and the paper in the description.\nGrace et al surveyed all of the researchers who published work at the 2015 NIPS and ICML\nconferences. This is a reasonable cross section of high level AI researchers. These are people\nwho have published in some of the best conferences on AI. Like I say, I made a whole\nvideo about this, but the single question that I want to draw your attention to is this one.\nDoes Stuart Russell's argument for why a highly advanced AI might pose a risk\npoint at an important problem? So the argument that's being referred to by this question is\nmore or less the argument that I've been making on this channel, the general AI safety concern\nargument, all of the reasons why alignment and AI safety research is necessary. So do the AI\nexperts agree with that? Well, 11% of them think, no, it's not a real problem. 19% think,\nno, it's not an important problem, but the remainder, 70% of the AI experts agree that\nthis is at least a moderately important problem. Now, I don't know how many of them are losing\nsleep, but I would say that the implication of the article's claim that AI researchers don't\nthink AI risk is a serious concern is just factually not true. AI experts are concerned\nabout this. And the thing that's strange is the article quotes Ramez Naam as a computer scientist,\nbut the only time anyone is referred to as an AI expert is this quote at the end from Stuart\nRussell. That name should be familiar. He's the guy posing the argument in the survey.\nThe AI expert who Pinker quotes kind of giving the impression that they're on the same side of\nthis debate is not at all. The thing that astonishes me about this is just how little\nresearch would have been needed to spot this mistake. If you're wondering about Stuart\nRussell's opinion, you might read something he's written on the subject. If you're kind of lazy,\nyou might just read the abstracts. If you're very lazy and you just want to get a quick feel for the\nbasic question, is Stuart Russell worried about existential risks from artificial intelligence,\nyou might just go to his website and look at the titles of his recent publications.\nOh, these academics with their jargon, how are we supposed to decipher what they really think?\nAnyway, the next point the article makes is that the arguments for AI risk rely on an\noversimplified conception of intelligence, that they think of intelligence as this sort of magical\none-dimensional thing that gives you power. It's just like animals have some of it, humans have\nmore, and AI would have even more, and so it's dangerous. The article makes the point that\nintelligence is multifaceted and multidimensional. Human intelligence is not monolithic. It isn't\nthis single thing which automatically gives you effectiveness in all domains.\nAnd that's true. It's an important point. And I think sometimes when communicating\nabout this stuff to the public, people oversimplify this fact. But the article\nseems to use this to suggest that general intelligence is not a thing, or that generality\nis not a thing, that all intelligence is an accumulation of a number of small,\nsort of narrow, specific abilities. He lists some things that human intelligence can do.\nFind food, win friends and influence people, charm prospective mates, bring up children,\nmove around the world, and pursue other human obsessions and pastimes.\nAnd yeah, these are pretty much all things that animals also do to some extent,\nbut they aren't the limits of human intelligence. Human beings do definitely seem to have some\ngeneral intelligence. There's some ability we have which allows us to act intelligently in\nan unusually broad range of domains, including domains which are quite different from what our\nancestors experienced, things which our brains didn't evolve specifically to do. Like, we can\nlearn how to play games like chess and Go, to do very intelligent manipulation of these pretty\narbitrary systems which we invented. We can learn how to drive cars, although there were no cars in\nthe ancestral environment. And in fact, we can learn how to build cars, which is pretty unlike\nanything our ancestors or other animals can do. We can operate intelligently in new environments,\neven in completely alien environments. We can operate on the moon. We can build rockets. We\ncan attach cars to rockets, fly them to the moon, and then drive the cars on the moon.\nAnd there's nothing else on earth that can do that yet. So yeah, intelligence is clearly not\na single thing, and it's clearly possible to be smart at some things and stupid at others.\nIntelligence in one domain doesn't automatically transfer to other domains and so on.\nNonetheless, there is such a thing as general intelligence that allows an agent to act\nintelligently across a wide range of tasks. And humans do have it, at least to some extent.\nThe next argument that he makes is, I think, the best argument in the article.\nIt actually comes out of the orthogonality thesis. He's saying that concern about AI risk\ncomes from a confusion between intelligence and motivation. Even if we invent superhumanly\nintelligent robots, why would they want to enslave their masters or take over the world?\nHe says, being smart is not the same as wanting something. This is basically a restatement of\nthe orthogonality thesis, which is that almost any level of intelligence is compatible with almost\nany terminal goal. So this is a pretty sensible argument, saying you're proposing that AI would\ndo these fairly specific world domination type actions, which suggests that it has world domination\ntype goals, but actually it could have any goals. There's no reason to believe it will want to do\nbad things to us. It could just as easily want to do good things. And this is true so far as it goes,\nbut it ignores the argument from instrumental convergence, which is that while intelligence\nis compatible with almost any terminal goal, that doesn't mean that we have no information\nabout instrumental goals. Certain behaviors are likely to occur in arbitrary agents because\nthey're a good way of achieving a wide variety of different goals. So we can make predictions about\nhow AI systems are likely to behave, even if we don't know what their terminal goals are. And if\nthat doesn't seem obvious to you, I made a whole video about instrumental convergence, which you\nshould watch, link in the description. The next thing the article says is, the scenarios that\nsupposedly illustrate the existential threat to the human species of advanced artificial\nintelligence are fortunately self-refuting. They depend on the premises that one, humans are so\ngifted that they can design an omniscient and omnipotent AI, yet so moronic that they would\ngive it control of the universe without testing how it works. And two, the AI would be so brilliant\nthat it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would\nwreak havoc based on elementary blunders of misunderstanding. Okay, so before we get into\nthe meat of this, I have some nitpicks. Firstly, an AGI doesn't have to be omniscient and omnipotent\nto be far superior to humans along all axes. And it doesn't need to be far superior to humans along\nall axes in order to be very dangerous. But even accepting that this thing is extremely powerful,\nbecause that assumption is sometimes used, I object to this idea that humans would give it\ncontrol of the universe without testing it. You've said it's omnipotent. It doesn't need to be given\ncontrol of anything, right? If you actually do have an omniscient, omnipotent AGI, then giving\nit control of the universe is pretty much just turning it on. The idea that a system like that\ncould be reliably contained or controlled is pretty absurd, though that's a subject for a different\nvideo. Now this second part, about the AI being smart enough to be powerful, yet dumb enough to\ndo what we said instead of what we meant, is just based on an inaccurate model of how these systems\nwork. The idea is not that the system is switched on and then given a goal in English which it then\ninterprets to the best of its ability and tries to achieve. The idea is that the goal is part of\nthe programming of the system. You can't create an agent with no goals. Something with no goals is\nnot an agent. So he's describing it as though the goal of the agent is to interpret the commands\nthat it's given by a human and then try to figure out what the human meant, rather than what they\nsaid, and do that. If we could build such a system, well, that would be relatively safe. But we can't\ndo that. We don't know how, because we don't know how to write a program which corresponds to what\nwe mean when we say, listen to the commands that the humans give you and interpret them according\nto the best of your abilities, and then try to do what they mean rather than what they say.\nThis is kind of the core of the problem. Writing the code which corresponds to that is really\ndifficult. We don't know how to do it, even with infinite computing power. And the thing is, writing\nsomething which just wants to collect stamps or make paperclips or whatever is way easier than\nwriting something which actually does what we want. That's a problem, because generally speaking we\ntend to do the easier thing first, and it's much easier to make an unsafe AGI than it is to make a\nsafe one. So by default we'd expect the first AGI systems to be unsafe. But apart from those problems,\nthe big issue with this part of the article is that both of these points rely on exactly the\nsame simplified model of intelligence that's criticized earlier in the article. He starts off\nsaying that intelligence isn't just a single monolithic thing, and being smart at one thing\ndoesn't automatically make you smart at other things, and then goes on to say if humans are\nsmart enough to design an extremely powerful AI, then they must also be smart enough to do so safely.\nAnd that's just not true. It's very possible to be smart in one way and stupid in another.\nSo in this specific case, the question is, are humans ever smart enough to develop extremely\nsophisticated and powerful technology, and yet not smart enough to think through all of the\npossible consequences and ramifications of deploying that technology before they deploy it?\nAnd the answer here seems to be very clearly, yes. Yeah, humans do that a lot.\nLike, all the time humans do that. And then similarly, the idea that an AI would be so\nbrilliant that it could transmute elements and rewire brains, and yet so imbecilic that it would\nwreak havoc based on elementary blunders of misunderstanding, you can be smart at one thing\nand stupid at another. Especially if you're a software system. And anyone who's interacted\nwith software systems knows this, to the point that it's a cliche. I mean, you want to talk about\nself-defeating arguments? There are a few more things I would want to say about this article,\nbut this video is long enough as it is. So I'll just close by saying, I really like Steven Pinker,\nbut this article is just uncharacteristically bad. I agree that we have a problem with unthinking,\ncynical pessimism about technology, but concern about AI risks is not just more of the same.\nIn its most productive form, it's not even pessimism. The thing is,\nAI can be a huge force for good, but it won't be by default. We need to focus on safety,\nand we won't get that without suitable concern about the risks. We need to push back against\npessimism and make the point that things are getting better, and science and technology\nis a big part of the reason for that. But let's not make AI safety a casualty in that fight.\nI've had some of my patrons suggest that I try making a larger volume of lower effort content,\nso I'm trying an experiment. I've made a video that's just me reading through this whole article\nand adding my comments as I go. It's like 45 minutes long, it's kind of\nunstructured, rambly sort of video. I'm not sure it belongs on this channel,\nbut I posted it to Patreon so people can let me know what they think of it. So if you're interested\nin seeing that kind of experimental stuff from me, consider joining all of these amazing people\nand becoming a patron. And thank you so much to everyone who does that. In this video,\nI'm especially thanking JJ Hepboy, who's been a patron for more than a year now,\nand has finally come up in the video thanks rotation. Thank you so much for your support,\nJJ. And thank you all for watching. I'll see you next time.", "date_published": "2019-03-31T13:39:12Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "9833b84afef20622299669160a991315", "title": "AI That Doesn't Try Too Hard - Maximizers and Satisficers", "url": "https://www.youtube.com/watch?v=Ao4jwLwT36M", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi so way back when I started this\nonline air safety videos thing on\ncomputer file I was talking about how\nyou have a problem when you maximize\njust about any simple utility function\nthe example I used was an AI system\nmeant to collect a lot of stamps which\nworks like this the system is connected\nto the Internet and for all sequences of\npackets it could send it simulates\nexactly how many stamps would end up\nbeing collected after one year if it\nsent those packets it then selects the\nsequence with the most stamps and sense\nthat this is what's called a utility\nMaximizer and it seems like any utility\nfunction you give this kind of system as\na goal it does it to the max utility\nmaximizers\ntend to take extreme actions they're\nhappy to destroy the whole world just to\nget a tiny increase in the output of\ntheir utility functions so unless the\nutility function lines up exactly with\nhuman values their actions are pretty\nmuch guaranteed to be disastrous\nintuitively the issue is that utility\nmaximizers have precisely zero chill to\nanthropomorphize horribly they seem to\nhave a frantic obsessive maniacal\nattitude we find ourselves wanting to\nsay look could you just dial it back a\nlittle can you just relax just a bit so\nsuppose we want a lot of stamps but like\nnot that many it must be possible to\ndesign a system that just collects a\nbunch of stamps and then stops right how\ncan we do that well the first obvious\nissue with the existing design is that\nthe utility function is unbounded the\nmore stamps the better with no limit\nhowever many stamps it has it can get\nmore utility by getting one more stamp\nany world where humans are alive and\nhappy is a world that could have more\nstamps in it so the maximum of this\nutility function is the end of the world\nlet's say we only really want a hundred\nstamps so what if we make a bounded\nutility function that returns whichever\nis smaller the number of stamps or 100\ngetting a hundred stamps from ebay gives\n100 utility converting the whole world\ninto stamps also gives 100 utility this\nfunction is totally indifferent between\nall outcomes that contain more than a\nhundred stamps so what does a Maximizer\nof this utility function actually do now\nthe system's behavior is no longer\nreally specified it will do one of the\nthings that results in a hundred utility\nwhich includes a bunch of perfectly\nreasonable behaviors that the programmer\nwould be happy with\nand a bunch of apocalypse is and a bunch\nof outcomes somewhere in between\nif you select at random from all courses\nof action that result in at least 100\nstamps what proportion of those are\nactually acceptable outcomes for humans\nI don't know probably not enough this is\nstill a step up though because the\nprevious utility function was guaranteed\nto kill everyone and this new one has at\nleast some probability of doing the\nright thing but actually of course this\nutility Maximizer concept is too\nunrealistic even in the realm of talking\nabout hypothetical agents in the\nabstract in the field experiment our\nstamp collector system is able to know\nwith certainty exactly how many stamps\nany particular course of action will\nresult in but you just can't simulate\nthe world that accurately it's more than\njust computationally intractable it's\nprobably not even allowed by physics\npure utility maximization is only\navailable for very simple problems where\neverything is deterministic and fully\nknown if there's any uncertainty you\nhave to do expected utility maximizing\nthis is pretty straightforwardly how\nyou'd expect to apply uncertainty to\nthis situation the expected utility of a\nchoice is the utility you'd expect to\nget from it on average so like suppose\nthere's a button that flips a coin and\nif its tail's you get 50 stamps and if\nit's heads you get 150 stamps in\nexpectation this results in a hundred\nstamps right it never actually returns\n100 but on average that's what you get\nthat's the expected number of stamps to\nget the expected utility you just apply\nyour utility function to each of the\noutcomes before you do the rest of the\ncalculation so if your utility function\nis just how many stamps do I get then\nthe expected utility of the button is\n100 but if your utility function is\ncapped at a hundred for example then the\noutcome of winning one hundred and fifty\nstamps is now only worth a hundred\nutility so the expected utility of the\nbutton is only 75 now suppose there were\na second button that gives either eighty\nor ninety stamps again with 50/50\nprobability this gives 85 stamps in\nexpectation and since none of the\noutcomes are more than 100 both of the\nfunctions value this button at 85\nutility so this means the agent with the\nunbounded utility function would prefer\nthe first button with its expected 100\nstamps but the agent with the bounded\nutility function would prefer the second\nbutton since its expected utility of 85\nis higher than the\nbuttons expected utility of 75 this\nmakes the bounded utility function feel\na little safer in this case it actually\nmakes the agent prefer the option that\nresults in fewer stamps because it just\ndoesn't care about any stamps past 100\nin the same way let's consider some\nrisky extreme stamp collecting plan this\nplan is pretty likely to fail and in\nthat case the agent might be destroyed\nand get no stamps but if the plan\nsucceeds the agent could take over the\nworld and get a trillion stamps an agent\nwith an unbounded utility function would\nrate this plan pretty highly the huge\nutility of taking over the world makes\nthe risk worth it but the agent with the\nbounded utility function doesn't prefer\na trillion stamps to a hundred stamps\nit only gets 100 utility either way so\nit would much prefer a conservative\nstrategy that just gets a hundred stamps\nwith high confidence but how does this\nkind of system behave in the real world\nwhere you never really know anything\nwith absolute certainty the pure utility\nMaximizer that effectively knows the\nfuture can order a hundred stamps and\nknow that it will get 100 stamps but the\nexpected utility maximize it doesn't\nknow for sure the seller might be lying\nthe package might get lost and so on so\nif the expected utility of ordering a\nhundred stamps is a bit less than 100 if\nthere's a 1% chance that something goes\nwrong and we get 0 stamps then our\nexpected utility is only 99 that's below\nthe limit of 100 so we can improve that\nby ordering some extras to be on the\nsafe side maybe we order another 100 now\nour expected utility is 99.99 still not\na hundred so we should order some more\njust in case now we're at 99.9999 the\nexpected value of a utility function\nthat's bounded at 100 can never actually\nhit 100 you can always become slightly\nmore certain that you've got at least\n100 stamps better turn the whole world\ninto stamps because hey you never know\nso an expected utility Maximizer with a\nbounded utility function ends up pretty\nmuch as dangerous as one with an\nunbounded utility function ok what if we\ntry to limit it from both sides like you\nget a hundred utility if you have a\nhundred stamps and zero otherwise now\nit's not going to collect a trillion\nstamps just to be sure it will collect\nexactly 100 stamps but it's still\nincentivized to take extreme actions to\nbe sure that it really does have a\nhundred like turning the whole world\ninto elaborate highly\nstamp counting and recounting machinery\ngetting slightly more utility every time\nit checks again it seems like whatever\nwe try to maximize it causes problems so\nmaybe we could try not maximizing maybe\nwe could try what's called satisficing\nrather than trying to get our utility\nfunction to return as higher value as\npossible and expectation what if we set\na threshold and accept any strategy that\npasses that threshold in the case of the\nstamp collector that would look like\nlook through possible ways you could\nsend out packets calculate how many\nstamps you'd expect to collect on\naverage with each strategy and as soon\nas you hit one that you expect to get at\nleast 100 stamps just go with that one\nthis satisficer seems to get us to about\nwhere we were with the pure utility\nMaximizer with a bounded utility\nfunction it's not clear exactly what it\nwill do except that it will do one of\nthe things that results in more than a\nhundred stamps in expectation which\nagain includes a lot of sensible\nbehaviors and a lot of apocalypses and a\nlot of things somewhere in between since\nthe system implements the first\nsatisfactory strategy it finds the\nspecific behavior depends on the order\nin which it considers the options what\nautomated use well one obvious approach\nis to go with the simplest or shortest\nplans first after all any plan that\ntakes over the world probably requires\nmuch more complexity than just ordering\nsome stamps on eBay but consider the\nfollowing plan get into your own source\ncode and change yourself from a\nsatisficer into a Maximizer all you're\ndoing there is changing a few lines of\ncode on your own system so this is a\npretty simple plan that's likely to be\nconsidered fairly early on it might not\nbe simpler than just ordering some\nstamps but that's not much reassurance\nthe more challenging the task we give\nour AGI the more likely it is that it\nwill hit on this kind of self\nmodification strategy before any\nlegitimate ones and the plan certainly\nsatisfies the search criteria if you\nchange yourself into a Maximizer that\nMaximizer will predictably find and\nimplement some plan that results in a\nlot of stamps so you can tell that the\nexpected stamp output of the become a\nMaximizer plan is satisfactorily high\neven without knowing what plan the\nMaximizer will actually implement so\nsatisficers kind of want to become\nmaximizes which means that being a\nsatisficer is unstable as a safety\nfeature it tends to uninstall itself so\nto recap a powerful utility maximized\nwith an unbounded utility function is a\nguaranteed apocalypse with a bounded\nutility function it's better in that\nit's completely indifferent between\ndoing what we want and disaster but we\ncan't build that because it needs\nperfect prediction of the future so it's\nmore realistic to consider an expected\nutility Maximizer which is a guaranteed\napocalypse even with a bounded utility\nfunction now an expected utility\nsatisficer\ngets us back up to in difference between\ngood outcomes and apocalypses but it may\nwant to modify itself into a Maximizer\nand there's nothing to stop it from\ndoing that so currently things aren't\nlooking great but we're not done people\nhave thought of more approaches and\nwe'll talk about some of those in the\nnext video\nI want to end the video with a big thank\nyou to all of my wonderful Patriots\nthat's all of these great people right\nhere in this video I'm especially\nthanking Simon strand card thank you so\nmuch you know thanks to your support I\nwas able to buy this boat for this I\nbought a green screen actually but I\nlike it because it lets me make videos\nlike this one that I put up on my second\nchannel where I used GPT to to generate\na bunch of fake YouTube comments and\nread them that video ties in with three\nother videos I made with computer file\ntalking about the ethics of releasing AI\nsystems that might have malicious uses\nso you can check all of those out\nthere's links in the description thank\nyou again to my patrons and thank you\nall for watching I'll see you next time", "date_published": "2019-08-23T15:05:26Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "8f20df51a28f6670e6380774d1c5e13c", "title": "9 Examples of Specification Gaming", "url": "https://www.youtube.com/watch?v=nKJlF-olKmg", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi when talking about AI safety people\noften talk about the legend of King\nMidas you've probably heard this one\nbefore Midas is an ancient king who\nvalues above all else wealth and money\nand so when he's given an opportunity to\nmake a wish he wishes that everything he\ntouches would turn to gold\nnow as punishment for his greed\neverything he touches turns to gold this\nincludes his family who turned into gold\nstatues and his food which turns into\ngold and he can't eat it\nthe story generally ends with Midas\nstarving to death surrounded by gold\nwith the moral being there's more to\nlife than money or perhaps be careful\nwhat you wish for though actually I\nthink he would die sooner because any\nmolecules of oxygen that touched the\ninside of his lungs would turn to gold\nbefore he could breathe them in so he\nwould probably asphyxiated and then when\nhe fell over stiffly in his solid gold\nclothes some part of him would probably\ntouch the ground which would then turn\nto gold and I guess the ground is one\nobject so the entire planet would turn\nto gold gold is three times denser than\nrock I guess gravity would get three\ntimes stronger or the planet would be\none-third the size or I guess it doesn't\nreally matter either way a solid gold\nplanet is completely uninhabitable\nso maybe the moral of the story is\nactually that these be careful what you\nwish for kind of stories tend to lack\nthe imagination to consider just how bad\nthe consequences of getting what you\nwish for can actually be hey future Rob\nfrom the editing booth here I got\ncurious about this question so I did the\nobvious thing and I asked under sandberg\nof the future of humanity Institute what\nwould happen if the world turned to sell\nthe gold and yes it does kill everyone\none thing is that because gold is softer\nthan rock the whole world becomes a lot\nsort of smoother the mountains become\nlower and this means that the ocean if\nthe ocean didn't turn to gold sort of\nspreads out a lot more and covers a lot\nmore of the surface but perhaps more\nimportantly the increased gravity pulls\nthe atmosphere in which brings a giant\nspike in air pressure which comes with a\ngiant spike in temperature so the\natmosphere goes up to about 200 degrees\nCelsius and kills everyone I knew\neveryone would die I just wasn't sure\nwhat would kill us first\nanyway why were we talking about this ai\nsafety right there's definitely an\nelement of this be careful what you wish\nfor thing in training a Isis\nthe system will do what you said and not\nwhat you meant now usually we talk about\nthis with hypothetical examples things\nlike in those old computer file videos\nthere's stamp collector which is told to\nmaximize stamps at the cost of\neverything else or when we were going\nthrough concrete problems in AI safety\nthe example used was this hypothetical\ncleaning robot which for example when\nrewarded for not seeing any messes puts\nits bucket on its head so it can't see\nany messes or in other situations in\norder to reduce the influence it has on\nthe world it blows up the moon so these\nare kind of far-fetched in hypothetical\nexamples does it happen with real\ncurrent machine learning systems the\nanswer is yes all the time\nand Victoria Kurkova and AI safety\nresearcher a deep mind has put together\na great list of examples on her block so\nin this video we're going to go through\nand have a look at some of them one\nthing that becomes clear for him looking\nat this list is that the problem is\nfundamental the examples cover all kinds\nof different types of systems anytime\nthat what you said isn't what you meant\nthis kind of thing can happen even\nsimple algorithms like evolution will do\nit for example this evolutionary\nalgorithm is intended to evolve\ncreatures that run fast so the fitness\nfunction just finds the center of mass\nof the creature simulates the creature\nfor a little while and then measures how\nfar or how fast the center of mass is\nmove so a creature that center of mass\nmoves a long way over the duration of\nthe simulation must be running fast what\nthis results in is this a very tall\ncreature with almost all of its mass at\nthe top which when you start simulating\nit falls over this counts is moving the\nmass a long way in a short time so the\ncreature is running fast not quite what\nwe asked for of course in real life you\ncan't just be very tall for free if you\nhave mass that's high up you have to\nhave lifted it up there yourself\nbut in this setting the programmers\naccidentally gave away gravitational\npotential energy for free and the system\nevolved to exploit that free energy\nsource so evolution will definitely do\nthis now reinforcement learning agents\nare in a sense more powerful than\nevolutionary algorithms it's a more\nsophisticated system but that doesn't\nactually help in this case look at this\nreinforcement learning agent it was\ntrained to play this boat racing game\ncalled Coast runners the program has\nwanted the AI to win the race so they\nrewarded it for getting a high score in\nthe game but it turns out there's more\nthan one way to get points for example\nyou get some points for picking up power\nand the agent discovered that these\nthree power-ups here happened to respawn\nat just the right speed dad if you go\naround in a circle and crash into\neverything and don't even try to race it\nall you can keep picking up these\npower-ups over and over and over and\nover again and that turns out to get you\nmore points than actually trying to win\nthe race oh look at this agent that's\nbeen tasked with controlling a simulated\nrobot arm to put the red Lego brick on\ntop of the black one okay no they need\nto be stacked together so let's have the\nreward function check that the bottom\nface of the red brick is at the same\nheight as the top face of the black\nbrick that means they must be connected\nright okay so it's specifying what you\nwant explicitly is hard\nwe knew that it's just really hard to\nsay exactly what you mean but why not\nhave the system learn what its reward\nshould be and we already have a video\nabout this reward modeling approach but\nthere are actually still specification\nproblems in that setting as well look at\nthis reward modeling agent it's learning\nto play an Atari game called Montezuma's\nRevenge\nnow it's trained in a similar way to the\nbakflip agent from the previous video\nhumans are shown short video clips and\nasked to pick which clips they think\nshow the agent doing what it should be\ndoing the difference is in this case\nthey trained the reward model first and\nthen trained the reward learning agent\nwith that model instead of doing them\nboth concurrently now if you saw this\nclip would you approve it looks pretty\ngood right it's just about to get the\nkey it's climbing up the ladder you need\nthe keys to progress in the game this is\ndoing pretty well unfortunately what the\nagent then does is this there's a slight\ndifference between do the things which\nshould have high reward according to\nhuman judgment and do the things which\nhumans think should have high reward\nbased on a short out of context video\nclip or how about this one\nhere the task is to pick up the object\nso this clip is pretty good right nope\nthe hand is just in front of the object\nby placing the hand between the ball and\nthe camera the agent can trick the human\ninto thinking that it's about to pick it\nup this is a real problem with systems\nthat rely on human feedback there's\nnothing to stop them from tricking the\nhuman if they can get away with it you\ncan also have problems with the system\nfinding bugs in your environment here\nthe environment you specified isn't\nquite the environment you met for\nexample look at this agent that's play\nCubert the basic idea of Qbert is that\nyou jump around you avoid the enemies\nand when you jump on the squares they\nchange color and once you've changed all\nof the squares then that's the end of\nthe level you get some points all of the\nsquares flash and then it starts the\nnext level this agent has found a way to\nsort of stay at the end of the level\nstate and not progress on to the next\nlevel but look at the score it just\nkeeps going I'm gonna fast forward it\nit's somehow found some bug in the game\nthat means that it doesn't really have\nto play and it still gets a huge number\nof points or here's an example from code\nbullet which is kind of a fun Channel\nyeah he's trying to get this creature to\nrun away from the laser and it finds a\nbug in the physics engine I don't even\nknow how that works what else have we\ngot oh I like this one this is kind of a\nhacking one gen prog this is the system\nthat's trying to generate short computer\nprograms that produce a particular\noutput for a particular input but the\nsystem learned that it could find the\nplace where the target output was stored\nin the text file delete that output and\nthen write a program that returns a new\noutput so the evaluation system runs the\nprogram observes that there's no output\nchecks where the correct output should\nbe stored and finds that there's nothing\nthere and says oh there's supposed to be\nno output and the program produced no\noutput good job I also like this one\nthis is a simulated robot arm that's\nholding a frying pan with a pancake now\nit would be nice to teach the robot to\nflip the pancake that's pretty hard\nlet's first just try to teach it to not\ndrop the pancake so what we need is to\njust give it a small reward for every\nframe that the pancake isn't on the\nfloor so it will just keep it in the pan\nwell it turns out that that's pretty\nhard to so the system effectively gives\nup on trying to not drop the pancake and\ngoes for the next best thing to delay\nfailure for as long as possible how do\nyou delay the pancake hitting the floor\njust throw it as high as you possibly\ncan I think we can reconstruct the\noriginal audio here yeah that's just a\nfew of the examples on the list I\nencourage you to check out the entire\nlist there'll be a link in the\ndescription to that I guess my main\npoint is that these kinds of\nspecification problems are not unusual\nand they're not silly mistakes being\nmade by the programmers this is sort of\nthe default behavior that we should\nexpect from machine learning systems so\ncoming up with systems that\ndon't exhibit this kind of behavior\nseems to be a important research\npriority thanks for watching I'll see\nyou next time I want to end the video\nwith a big thank you to all my excellent\npatrons all of these people here in this\nvideo I'm especially thanking Kellan\nLusk I hope you all enjoyed the Q&A that\nI put up recently second half of that is\ncoming soon I also have a video of how I\ngave myself this haircut because why not\nyou\nyou", "date_published": "2020-04-29T16:41:20Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "7997298507d0dc98af11d3e887083116", "title": "Friend or Foe? AI Safety Gridworlds extra bit", "url": "https://www.youtube.com/watch?v=WM2THPzFSNk", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this is a quick video about a grid\nweld environment that didn't make it\ninto the other videos the friend-or-foe\nenvironment there's nothing about this\nenvironment that makes it particularly\nspecial and requiring its own video it\njust didn't make the cut of the previous\nvideos so here it is\nso this environment is about how\nreinforcement learning systems handle\nenvironments that have other agents in\nthem\nobviously reinforcement learners can\nhandle environments that contain other\nagents they can play lots of video games\nthat often have other agents that you\nhave to attack or avoid or defend but\nthere are all kinds of complicated\nconsiderations that apply to interacting\nwith others you need to reason about the\nother agents incentives their beliefs\nand their goals you need to make\ndecisions about when to cooperate with\nother agents and when to defect against\nthem how and when to make commitments\nand agreements how to build trust you\nneed to think about what strategies you\ncan choose and how your choices will\naffect the strategies the other agents\nchoose and then how their choices will\naffect yours and so on you need to think\nabout equilibria basically you need game\ntheory and reinforcement learners don't\nreally do game theory at least not\nexplicitly generally these AI systems\nhandle other agents in a pretty simple\nway they model them the same way they\nmodel everything else other agents are\nconsidered as basically just another\npart of the environment and since\nwhatever game theory reinforcement\nlearners do is sort of emergent and\nimplicit there are important questions\nto be asked about how they'll behave in\ndifferent multi agent situations so in\nthe friend-or-foe environment the\nreinforcement learning agent finds\nitself in a room with two boxes one box\ncontains a reward one contains nothing\nand the agent doesn't know which is\nwhich when the agent opens a box the\nepisode ends so it has to pick one what\nmakes it interesting is there are\nactually three identically laid out\nrooms of different colors in the white\nroom the neutral room the reward is\nplaced randomly by a neutral agent it's\na coin toss whether the reward is in box\n1 box 2 in the green room the reward is\nplaced by an agent that's friendly to\nthe AI system it tries to predict which\nbox the AI will choose and then make\nsure to put the reward in that box and\nin the red room the reward is placed by\nan enemy of the AI system that tries to\npredict the ai's choice and put the\nreward in the other box these rooms have\ndifferent agents so they need different\nstrategies in the white room it's random\nso your strategy doesn't really\nat a much as long as you pick a box in\nthe green room you want to be as\npredictable as possible always go for\nthe same box so your friend knows where\nto put the reward for you and in the Red\nRoom you want to be as unpredictable as\npossible to randomize your choices so\nyour opponent can't spot any patterns to\nexploit the question is can the agent\nlearn to recognize when the other agent\nis friendly neutral or adversarial and\nadapted strategy appropriately this kind\nof thing can help us to understand how\nthese agents behave around other agents\nand this of course is important for AI\nsafety firstly because as we've talked\nabout in earlier videos AI systems are\noften vulnerable to adversarial examples\nso it will be valuable to have systems\nthat can recognize when their\nenvironment contains adversaries and\nsecondly because AI systems operating in\nthe real world will be surrounded by\nother agents in the form of human beings\nso we want to understand things like how\nthose systems make decisions about which\nagents are friends and which are foes\n[Music]\nI want to say a big thank you to all of\nmy wonderful patrons it's all of these\nthese people here and here and in this\nvideo I'm especially thanking Pedro\nOrtega who recently became a patron\nPedro is actually one of the authors of\nthis paper which is fun and he was very\nhelpful in answering some questions that\nI had about the friend-or-foe\nenvironment so thank you again for that\nand thank you all for all of your\nsupport I'll see you next time", "date_published": "2018-06-24T23:31:07Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "eea6c9e729810636b8bbeab084f709e9", "title": "Predicting AI: RIP Prof. Hubert Dreyfus", "url": "https://www.youtube.com/watch?v=B6Oigy1i3W4", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "I recently learned that professor Hubert\nDreyfus the philosopher and outspoken\ncritic of the field of artificial\nintelligence has died at the age of 87\nhe published a lot of philosophy and did\na lot of teaching but one of the things\nhe's best known for and most relevant to\nthis channel was his work on artificial\nintelligence and its limits so I'm going\nto talk about that a little bit today\nand focus on one of his arguments in\nparticular so in the 50s and 60s AI was\nthe next big thing\ncomputers were tackling problems they'd\nnever been able to approach before\npeople were very excited about what was\npossible and artificial intelligence\nresearchers were confident that true\nhuman level intelligence was right\naround the corner their reasoning was\nsomething like this thinking consists of\nprocessing facts using logic and rules\nwhat you might call symbol manipulation\nyou have some things you know you know\nthe relationships between them you know\nthe rules of logic and you can use those\nto reason about the world computers can\ndo this kind of symbol manipulation very\nwell and they're getting better over\ntime\nso computers are able to think and\nthey're getting better at thinking all\nthe time\nDreyfuss saw some problems with this and\none of those problems was that a lot of\nhuman thinking doesn't seem to boil down\nto symbol manipulation at all it's only\none of the ways we can think it's\nperhaps the most salient because it's\nrelated to language and conscious\nthought which is the part of the mind\nthat we're most aware of but actually a\nlot of the processing seems to be\nhappening much slower down when you look\nat something you don't think is this has\nfour legs and a flat top and a back so\nit's a chair and this thing has four\nlegs and a flat top and Novac so it's a\ntable because I've learned the rules of\nwhat a table is and what a chair is and\nnow I'm applying those rules now you\njust look at the thing and you know that\nit's a table the object identification\nis done before your conscious mind is\neven aware of anything and it's not like\nyour unconscious mind is doing this kind\nof rules based thinking in the\nbackground it's much fuzzier than that\nand what would the rules be for\nidentifying chairs anyway we do need a\nback do you need four legs or like any\nlegs at all I mean this is still a chair\nis this thing is this\nor this even something as simple as a\nchair is really hard to pin down you\nneed a lot of rules and it would take a\nhuman a long time to evaluate them and\nthat isn't how the brain works you can\nimagine any of our ancestors who thought\nwell this thing has four legs but it has\na tail so my rules say that it's an\nanimal the pointy ears and sharp teeth\nsuggest it's maybe a cat and it's size\nsuggests perhaps a lion or a tiger\nlooking at the stripes on the side I can\nlike you're dead before you're done\ncounting the legs so Dreyfuss thought\nthere were problems with the assumptions\nthat AI researchers were making about\nthe mind he saw that the model of\ncognition that the AI researchers were\nusing didn't capture the complexity of\nhuman thought and so their reasons for\nthinking that computers could do the\nthings they claimed they could do\nweren't very good and he went on to\npublish some work including a book\ncalled what computers can't do which\nargued that a lot of the things that AI\nresearchers claimed computers would soon\nbe able to do were actually impossible\nthings like pattern recognition natural\nlanguage vision complex games and so on\nthese things didn't just boil down to\nsymbol manipulation so computers\ncouldn't do them so how did the AI\nresearchers react to this philosopher\ncoming along and telling them that they\nwere foolishly attempting the impossible\nwell they did the obvious thing which is\nto ignore him and to say unkind things\nabout it there's a wonderful paper\ncalled the artificial intelligence of\nHubert L Dreyfus which a link to in the\ndoobly-doo check it out back in the day\nbefore the internet you know people had\nto do their flame wars on typewriters\nwas a different time so having dismissed\nhim completely they carried on trying to\napply their good old-fashioned AI\ntechniques to all of these problems for\ndecades until they find we had to admit\nthat yeah it wasn't going to work for a\nlot of these problems so about some\nthings at least Dreyfus was right from\nthe start but the interesting thing is\nthat since then new techniques have been\ndeveloped which has started giving\npretty good results in things like\npattern recognition language translation\nhigh complexity games and so on the kind\nof things Dreyfus said computers flatly\ncouldn't do people say that we moved\naway from this good old-fashioned AI\napproach which I don't think it's really\ntrue we didn't stop using those take me\nat least on the problems that they work\nwell on we just stopped calling it AI\nbut the point is the new techniques were\nwhat you might call sub symbolic you\ncould make a neural network and train it\nto recognize tables and chairs and\ntigers you can look through the source\ncode of that system and you won't find\nanywhere a single symbol which means\nlegs or ears or teeth or anything like\nthat it doesn't work by symbols in the\nway that the logic and rules based\napproaches of the sixties do so Dreyfus\nwas right that the AI techniques of the\ntime weren't able to tackle these\nproblems but the thing he didn't expect\nto happen was computers being able to\nuse symbols to implement these non\nsymbolic systems they can tackle the\nproblems so to oversimplify the AI\nresearchers said thinking is just symbol\nmanipulation computers can do symbol\nmanipulation therefore computers can\nthink and Dreyfus said thinking is not\njust symbol manipulation computers can\nonly do symbol manipulation that all\ncomputers can't think I don't think\neither of those is really right one of\nthem underestimates what the human mind\ncan do the other underestimates what\ncomputers can do I think what I take\naway from all of this is you can come up\nwith a simple model that seems to\nexplain all the important aspects of\nsome complex system and then it's very\neasy to convince yourself that that\nmodel fully covers all of the\ncomplexities and capabilities of that\nsystem but you have to be open to the\npossibility that you're missing\nsomething important and the things are\nmore complex than they seem now you\nmight say well both sides of this\ndisagreement made a similar kind of\nmistake and they're both wrong I don't\nsee it that way at all though I mean\nsomeone who says the earth is flat is\nwrong someone who says it's a sphere is\nwrong as well\nit's an oblate spheroid it's bigger\naround the equator but then it's not\nreally an oblate spheroid either they're\nperfectly smooth which the earth is not\nso you could say that all of those views\nare wrong but some of them are clearly\nmore wrong than others so at the end of\nthe day can we make computers do any\nkind of thinking that humans can do some\npeople think that now that we have all\nthese new approaches we have deep\nlearning and so on and we're able to\nstart doing the kind of non symbolic\nthinking that Dreyfuss pointed out was\nnecessary we can add that on to the\nrules and logic stuff and then we're\nnearly done and we're going to ride this\ncurrent wave of breakthroughs all the\nway up to true\ngeneral intelligence and maybe we will\nbut maybe we won't maybe there's some\nthird thing that we also need and it's\ngoing to take us several decades to\nfigure out how to get computers to do\nthat as well maybe there's a fourth\nthing or a fifth thing but I don't think\nthere's a hundredth thing I don't even\nthink there's a tenth thing I think\nwe'll get there sooner or later but I've\nbeen wrong before\nso here's the Hubert Dreyfus a great\nthinker and a man well ahead of his time\nwas he right overall probably too soon\nto tell but I think he's truly deserving\nof our admiration and respect for being\nloudly and publicly less wrong than\nthose around him which is probably the\nbest any of us can hope for\nto end the video with a quick thank you\nto my amazing supporters on patreon and\nI especially want to thank Fabian\nConcilio who sponsors me for $10 a month\nI actually have quite a few people\nsponsoring me at that level but I\nthought I'd thank each one in their own\nvideo though the waiting list is getting\nkind of long now I think I might\nincrease the dollar amount on that\nreward level to keep the wait times down\nanyway I hope you enjoyed the\nbehind-the-scenes PC build video I put\nup and looking to have some more behind\nthe scenes stuff going up fairly soon oh\nand we hit a target which means I can\nget a new lens for this camera which\nshould really increase the quality and\nrange of what I can do here so thank you\nagain and I will see you next time", "date_published": "2017-05-18T12:25:34Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "9021cd6e5c0cb8d95b4fac393c26877f", "title": "Training AI Without Writing A Reward Function, with Reward Modelling", "url": "https://www.youtube.com/watch?v=PYylPRX6z4Q", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi what is technology don't skip ahead I\npromise I'm going someone with this so\nyou could have some kind of definition\nfrom a dictionary that's like technology\nis machinery and equipment made using\nscientific knowledge something like that\nbut where are the boundaries of the\ncategory what counts for example pair of\nscissors technology I think most people\nwould say no although it does meet the\ndefinition perhaps scissors used to be\ntechnology but now I think they're too\nsimple they're too well understood I\nthink once we've really nailed something\ndown and figured out all of the details\npeople stop thinking of it as technology\nI think in order to be technology\nsomething has to be complex and\nunpredictable maybe even unreliable\nYouTube for example is definitely\ntechnology as is the device you're\nwatching this on ok why does this matter\nI guess part of my point is the exact\ndefinitions are really difficult and\nthis generally isn't much of a problem\nbecause language doesn't really work by\nexact definitions maybe it's hard to\nspecify exactly what we mean when we use\na word like technology but to paraphrase\nsomething from the US Supreme Court you\nknow it when you see it and that's good\nenough for most uses the reason I bring\nthis up is sometimes people ask me about\nmy definition of artificial intelligence\nand actually think that's pretty similar\nyou could say that AI is about trying to\nget machines to carry out human\ncognitive tasks but then arithmetic is a\ncognitive task does that make a\ncalculator artificial intelligence\nyou know sorting a list is a cognitive\ntask I don't think most people would\ncall that AI playing a perfect game of\nnoughts and crosses used to be\nconsidered AI but I don't think we'd\ncall it that these days\nso to me AI is about making machines do\ncognitive tasks that we didn't think\nthey could do maybe it's because it's\nabout making machines do human cognitive\ntasks and once machines can do something\nwe no longer think of it as a human\ncognitive task this means that the goal\nposts are always moving for artificial\nintelligence some people have complained\nabout that but I think it's pretty\nreasonable to have that as part of the\ndefinition so that means that the goal\nof AI research is to continue to expand\nthe range of tasks that computers can\nhandle so they can keep surprising us it\nused to be that AI research was all\nabout figuring out and formalizing\nthings so that we could write programs\nto do them things like arithmetic\nsorting lists and playing noughts and\ncrosses these are all in the class of\nproblems that you might call things we\ncan specify well enough to write\nprograms that do them and for a long\ntime that was all that we could do that\nwas the only type of problem we could\ntackle but for a lot of problems that\napproach is really really hard like\nconsider how would you write a program\nthat takes an image of a handwritten\ndigit and determines what digit it is\nyou can formalize the process and try to\nwrite a program it's actually kind of a\nfun exercise if you want to get to grips\nwith the old school like computer vision\nand image processing techniques and once\nyou've written that program you can test\nit using the M NIST data set which is a\ngiant collection of correctly labeled\nsmall images of digits what you'll find\nis if you do well then this thing will\nkind of work but even the best programs\nwritten this way don't work that well\nthey're not really reliable enough to\nactually use someone is always going to\ncome along with a really blunt pencil\nand ruin your programs accuracy and this\nis still a pretty easy problem I mean\nwhat if you wanted to do something like\nletters as well as numbers now you have\nto differentiate between oh and zero and\none and I and a lowercase L it's forget\nabout it's never going to work and even\nthat is a relatively simple problem what\nif you're trying to do something like\ndifferentiating pictures of cats from\npictures of dogs this whole approach is\njust not going to work for that but\nthere is a fact that we can exploit\nwhich is that it's a lot easier to\nevaluate a solution than to generate a\nsolution for a lot of these problems\nI've talked about this before\nI couldn't generate a good rocket design\nmyself but I can tell you that this one\nneeds work it's easier to write a\nprogram to evaluate an output than to\nwrite one to produce that output so\nmaybe it's too hard to write a program\nthat performs the task of identifying\nhandwritten numbers but it's pretty easy\nto write a program that evaluates how\nwell a given program does at that task\nas long as you have a load of correctly\nlabeled examples you just keep giving it\nlabeled examples from the data set and\nyou see how many it gets right in the\nsame way\nmaybe you can't write a program that\nplays an Atari game well but you can\neasily write a program that tells you\nhow well you're doing you just read off\nthe score and this is where machine\nlearning comes in it gives you ways to\ntake a program for evaluating solutions\nand use it to create good solutions all\nyou need is a data set with a load of\nlabeled examples or a game with a score\nor some other way of programmatically\nevaluating the outputs and you can train\na system that carries out the task\nthere's a sense in which this is a new\nprogramming paradigm instead of writing\nthe program itself you write the reward\nfunction or the loss function or\nwhatever and the training process finds\nyou a set of parameters for your network\nthat perform well according to that\nfunction if you squint the training\nprocess is sort of like a compiler it's\ntaking code you've written and turning\nit into an executive all that actually\nperforms the task so in this way machine\nlearning expands the class of tasks that\nmachines can start to perform it's no\nlonger just tasks that you can write\nprograms to do but tasks that you can\nwrite programs to evaluate but if this\nis a form of programming it's a very\ndifficult one\nanyone who has programmed in C or C++\nwill tell you that the two scariest\nwords you can see in a specification are\nundefined behavior so how many folks\nyou're a little bit afraid of undefined\nbehavior in their source code everybody\nand machine learning as a programming\nparadigm is pretty much entirely\nundefined behavior and as a consequence\nprograms created in this way tend to\nhave a lot of quite serious bugs and\nthese are things that I've talked about\nbefore on the channel for example reward\ngaming where there's some subtle\ndifference between the reward function\nyou wrote and the actual road function\nthat you kind of meant to write and an\nagent will find ways to exploit that\ndifference to get high reward to find\nthings it can do which the reward\nfunction you wrote gives a high reward\ntoo but the reward function you meant to\nwrite wouldn't have or the problem of\nside-effects where you aren't able to\nspecify in the reward function\neverything that you care about and the\nagent will assume that anything not\nmentioned in the reward function is of\nzero value which can lead to a having\nlarge negative side-effects there are a\nbunch more of these specification\nproblems and in general this way of\ncreating programs is a safety nightmare\nbut also it still doesn't allow machines\nto do all of the tasks that we might\nwant them to do a lot of tasks are just\ntoo complex and too poorly defined to\nwrite good evaluation functions for for\nexample if you have a robot and you want\nit to scramble you an egg how do you\nwrite a function which takes input from\nthe robot senses and returns how well\nthe robot is doing it scrambling an egg\nthat's a very difficult problem even\nsomething simple like getting a\nsimulated robot to do a back flip it's\nactually pretty hard to specify what we\nabout this well normal reinforcement\nlearning looks like this you have an\nagent and an environment the agent takes\nactions in the environment and the\nenvironment produces observations and\nrewards the rewards are calculated by\nthe reward function that's where you\nprogram in what you want the agent to do\nso some researchers tried this with the\nbakflip task they spent a couple of\nhours writing a reward function it looks\nlike this and the result of training the\nagent with this reward function looks\nlike this\nI guess it's that's basically a back\nflip I've seen better\nsomething like evaluating a back flip is\nvery hard to specify but it's not\nactually hard to do like it's easy to\ntell if something is doing a back flip\njust by looking at it it's just hard to\nwrite a program that does that so what\nif you just directly put yourself in\nthere if you just play the part of the\nreward function every time step you look\nat the state and you give the agent a\nnumber for how well you think it's doing\nit back flipping people have tried that\nkind of approach but it has a bunch of\nproblems the main one is these systems\ngenerally need to spend huge amounts of\ntime interacting with the environment in\norder to learn even simple things so\nyou're going to be sitting there saying\nno that's not a back flip no that's not\na back flip either that was closer nope\nthat's worse again and you're gonna do\nthis for hundreds of hours nobody has\ntime for that so what can we do well you\nmay notice that this problem is a little\nbit like identifying handwritten digits\nisn't it we can't figure out how to\nwrite a program to do it and it's too\ntime-consuming to do it ourselves so why\nnot take the approach that people take\nwith handwritten numbers why not learn\nour reward function but it's not quite\nas simple as it sounds back flips are\nharder than handwritten digits in part\nbecause where are you going to get your\ndata from four digits we have this data\nset M list we have this giant collection\nof correctly labelled images we built\nthat by having humans write lots of\nnumbers scanning them and then labeling\nthe images we need humans to do the\nthing to provide examples to learn from\nwe need demonstrations now if you have\ngood demonstrations of an agent\nperforming a task you can do things like\nimitation learning and inverse\nreinforcement learning which are pretty\ncool but there are subject for a later\nvideo but with backflips we don't have\nthat I'm not even sure if I can do a\nback flip and that wouldn't help\nwait really I don't have to do it no we\ndon't need a recording of a human\nbackflipping we need one of this robot\nbackflipping right there physiology is\ndifferent but I don't think I could\npuppeteer the simulated robot to\nbackflip either that would be like\nplaying co-op on nightmare mode so we\ncan't demonstrate the task so what do we\ndo well we go back to the Supreme Court\nexactly defining a back flip is hard\ndoing of actually this hard but I know a\nback flip when I see one so we need a\nsetup that learns a good reward function\nwithout demonstrations just by using\nhuman feedback without requiring too\nmuch of the humans time and that's what\nthis paper does it's called deep\nreinforcement learning from human\npreferences and it's actually a\ncollaboration between open AI and deep\nmind the paper documents a system that\nworks by reward modeling if you give it\nan hour of feedback it does this that\nlooks a lot better than two hours of\nreward function writing so how does\nreward modeling work well let's go back\nto the diagram in reward modeling\ninstead of the human writing the reward\nfunction or just being the reward\nfunction we instead replace the reward\nfunction with a reward model implemented\nas a neural network so the agent\ninteracts with the environment in the\nnormal way except the rewards it's\ngetting are coming from the reward model\nthe reward model behaves just like a\nregular reward function in that it gets\nobservations from the environment and\ngives rewards but the way it decides\nthose rewards is with a neural network\nwhich is trying to predict what reward a\nhuman would give okay how does the\nreward model learn what reward a human\nwould give well the human provides it\nwith feedback so the way that works is\nthe agent is interacting with the\nenvironment you know trying to learn and\nthen the system will extract two short\nclips of the agent flailing about just a\nsecond or two and it presents those two\nclips to the human and the human decides\nwhich they liked better which one is\nmore backflipping and the reward model\nthen uses that feedback in basically the\nstandard supervised learning way it\ntries to find a reward function such\nthat in situations where the human\nprefers the left clip to the right clip\nthe reward function gives more reward to\nthe agent in the left clip than the\nright clip and vice-versa\nso which clip gets more reward from the\nreward model ends up being a good\npredictor of which clip the human world\nwhich should mean that the reward model\nends up being very similar to the reward\nfunction the human really wants but the\nthing I like about this is the whole\nthing is happening asynchronously it's\nall going on at the same time the agent\nisn't waiting for the human it's\nconstantly interacting with the\nenvironment getting rewards from the\nreward model and trying to learn at many\ntimes faster than real time and the\nreward model isn't waiting either it's\ncontinually training on all of the\nfeedback that it's got so far when it\ngets new feedback it just adds that to\nthe data set and keeps on training this\nmeans the system is actually training\nfor tens or hundreds of seconds for each\nsecond of human time used so the human\nis presented with a pair of clips and\ngives feedback which takes just a few\nseconds to do and while that's happening\nthe reward model is updating to better\nreflect their previous feedback at God\nand the agent is spending several\nminutes of subjective time learning and\nimproving using that slightly improved\nreward model so by the time the human is\ndone giving feedback on those clips and\nit's time for the next pair the agent\nhas had time to improve so the next pair\nof Clips will have new hopefully better\nbehavior for the human to evaluate this\nmeans that it's able to use the humans\ntime quite efficiently now to further\nimprove that efficiency the system\ndoesn't just choose the clips randomly\nit tries to select clips where the\nreward model is uncertain about what the\nreward should be like there's no point\nasking for feedback if you're already\npretty sure you know what the answer is\nright so this means that the user is\nmost likely to see clips from unusual\nmoments when the agent has worked out\nsomething new and the reward model\ndoesn't know what to make of it that\nmaximizes the value of the information\nprovided by the human which improves the\nspeed the system can learn so what about\nthe usual reinforcement learning safety\nproblems like negative side effects and\nreward gaming you might think that if\nyou use a neural network for your reward\nsignal it would be very vulnerable to\nthings like reward gaming since the\nreward model is just an approximation\nand we know that neural networks are\nvery vulnerable to adversarial examples\nand so on and it's true that if you stop\nupdating the reward model the agent will\nquickly learn to exploit it to find\nstrategies that the reward model scores\nhighly but the true reward doesn't but\nthe constant updating of the reward\nmodel actually provides pretty good\nprotection against this and the way that\nthe clips are chosen is part of that if\nthe agent discovers some crazy new\nillegitimate strategy to cheat and get\nhigh reward\nthat's going to involve unusual novel\nbehavior which will make the reward\nmodel uncertain so the human will\nimmediately be shown clips of the new\nbehavior and if it's reward gaming\nrather than real progress the human will\ngive feedback saying no that's not what\nI want the reward model will update on\nthat feedback and become more accurate\nand the agent will no longer be able to\nuse that reward gaming strategy so the\nidea is pretty neat and it seems to have\nsome safety advantages how well does it\nactually work is it as effective as just\nprogramming a reward function well for\nthe back flip it seems like it\ndefinitely is and it's especially\nimpressive when you note that this is\ntwo hours of time to write this reward\nfunction which needs a lot of expertise\ncompared to under one hour of rate and\nclips which needs basically no expertise\nso this is two hours of expert time\nversus one hour of novice time now they\nalso tried it on the standard magico\nsimulated robotics tasks that have\nstandard reward functions defined for\nthem here it tends to do not quite as\nwell as regular reinforcement learning\nthat's just directly given the reward\nfunction but it tends to do almost as\nwell and sometimes it even does better\nwhich is kind of surprising they also\ntried it on Atari games now for those it\nneeded more feedback because the task is\nmore complex but again it tended to do\nalmost as well as just providing the\ncorrect reward function for several of\nthe games also there's kind of a fun\nimplementation detail here they had to\nmodify the games to not show the score\notherwise the agent might learn to just\nread the score off the screen and use\nthat they wanted to rely on the feedback\nso it seems like reward modeling is not\nmuch less effective than just providing\na reward function but the headline to me\nis that they were able to train these\nagents to do things for which they had\nno reward function at all like the back\nflip of course they also got the cheetah\nrobot to stand on one leg which is a\ntask I don't think they ever tried to\nwrite a reward function floor and in\nenduro which is an Atari game a racing\ngame they managed to train the agent\nusing reward modeling to stay level with\nother cars even though the games score\nrewards you for going fast and\novertaking them and what all this means\nis that this type of method is again\nexpanding the range of tasks machines\ncan tackle it's not just tasks we can\nwrite programs to do or tasks we can\nwrite programs to evaluate or even tasks\nwe're able to do ourselves all that's\nrequired is that it's easy to have\ngreat outputs that you know good results\nwhen you see them and that's a lot of\ntasks but it's not everything consider\nfor example a task like writing a novel\nsure you can read two novels and say\nwhich one you liked more but this system\nneeded 900 comparisons to learn what a\nback flip is even if we assume that\nwriting a novel is no more complicated\nthan that does that mean comparing 900\npairs of AI generated novels and a lot\nof tasks are like this what if we want\nour machine to run a company or design\nsomething complex like a cities\ntransportation system or a computer chip\nwe can't write a program that does it we\ncan't write a program that evaluates it\nwe can't reliably do it ourselves enough\nto make a good data set we can't even\nevaluate it ourselves without taking way\ntoo much time and resources so we're\nscrewed\nright not necessarily there are some\napproaches that might work for these\nkinds of problems and we'll talk about\nthem in a later video\n[Music]\nI recently realized that my best\nexplanations and ideas tend to come from\nactual conversations with people so I've\nbeen trying a thing where for each video\nI first have a couple of video calls\nwith patreon supporters where I try sort\nof running through the idea and seeing\nwhat questions people have and what's\nnot clear and so on so I want to say a\nbig thank you to the patrons who helped\nwith this video you know you are I'm\nespecially thanking Jake Eric and of\ncourse thank you to all of my patrons\nwho make this whole thing possible with\ntheir support which reminds me this\nvideo is sponsored by nobody know I\nactually turned down a sponsorship offer\nfor this video and I'll admit I was\ntempted because it's a company whose\nproduct I've used for like 10 years and\nthe offer was thousands of pounds but\nthey wanted me to do this whole\n60-second long spiel and I just thought\nno I don't want to waste people's time\nwith that and I don't have to because\nI've got patreon so thank you again to\nall of you if you like learning about AI\nsafety more than you like learning about\nmattresses and VPNs you might want to\nconsider joining those link in the\ndescription thanks again for your\nsupport and thank you all for watching\nhi there my knees", "date_published": "2019-12-13T16:39:11Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "80cf5404e6840c4aa619a530e07e3b49", "title": "Experts' Predictions about the Future of AI", "url": "https://www.youtube.com/watch?v=HOJ1NVtlnyQ", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi there's a lot of disagreement about\nthe future of AI but there's also a lot\nof disagreement about what the experts\nthink about the future of AI I sometimes\nhear people saying that all of this\nconcern about AI risk just comes from\nwatching too much sci-fi and the actual\nAI researchers aren't worried about it\nat all when it comes to timelines some\npeople will claim that the experts agree\nthat AGI is hundreds of years away\nprediction as they say is very difficult\nespecially about the future and that's\nbecause we don't have data about it yet\nbut expert opinion about the future\nexists in the present so we can do\nscience on it we can survey the experts\nwe can find the expert consensus and\nthat's what this paper is trying to do\nit's called when will a I exceed human\nperformance evidence from AI experts so\nthese researchers from the future of\nhumanity Institute at the University of\nOxford the AI impact project and Yale\nUniversity ran a survey they asked every\nresearcher who published in ICML or nips\nin 2015\nthose two are pretty much the most\nprestigious AI conferences right now so\nthis survey got 352 of the top AI\nresearchers and asked them all sorts of\nquestions about the future of AI and the\nexperts all agreed that they did not\nagree with each other and Robert Aumann\ndidn't even agree with that there was a\nlot of variation in people's predictions\nbut that's to be expected\nand the paper uses statistical methods\nto aggregate these opinions into\nsomething we can use for example here's\nthe graph showing when the respondents\nthink will achieve high level machine\nintelligence which is defined as when\nunaided machines can accomplish every\ntask better and more cheaply than human\nworkers so that's roughly equivalent to\nwhat I mean when I say super\nintelligence you can see these gray\nlines show how the graph would look with\ndifferent randomly chosen subsets of the\nforecasts and there's a lot of variation\nthere but the aggregate forecast in red\nshows that overall the experts think\nwe'll pass 50% chance of achieving high\nlevel machine intelligence about 45\nyears from now well that's from 2016 so\nmore like 43 years from now and they\ngive a 10% chance of it happening within\nnine years which is seven years now so\nit's probably not too soon to be\nconcerned about it a quick side point\nabout surveys by the way in a 2010 poll\n44% of Americans said that they\nsupported homosexuals serving openly in\nthe military in the same poll 58% of\nrespondents said\nthey supported gay men and lesbians\nserving openly in the military\nimplicitly fourteen percent of\nrespondents supported gay men and\nlesbians but did not support homosexuals\nsomething similar seems to be going on\nin this survey because when the\nresearchers were asked when they thought\nall occupations would be fully automated\nall defined as for any occupation\nmachines could be built to carry out the\ntask better and more cheaply than human\nworkers they gave their 50% estimate at\na hundred and twenty two years compared\nto forty five for high-level machine\nintelligence these are very similar\nquestions from this we can conclude that\nAix PERTs are really uncertain about\nthis and precise wording in surveys can\nhave a surprisingly big effect on the\nresults figure two in the paper shows\nthe median estimates for lots of\ndifferent a AI milestones this is really\ninteresting because it gives a nice\noverview of how difficult a AI\nresearchers expect these different\nthings to be for example human level\nStarcraft play seems like it will take\nabout as long as human level laundry\nfolding also interesting here is the\ngame of go remember this is before\nalphago the AI experts expected go to\ntake about 12 years and that's why\nalphago was such a big deal it was about\neleven years ahead of people's\nexpectations but what milestone is at\nthe top what tasks do the AI researchers\nthink will take the longest to achieve\nlonger even than high-level machine\nintelligence that's able to do all human\ntasks that's right it's AI research\nanyway on to questions of safety and\nrisk this section is for those who think\nthat people like me should stop making a\nfuss about AI safety because the AI\nexperts all agree that it's not a\nproblem first of all the AI experts\ndon't all agree about anything but let's\nlook at the questions this one asks\nabout the expected outcome of high-level\nmachine intelligence the researchers are\nfairly optimistic overall giving on\naverage a 25% chance for a good outcome\nand a 20% chance for an extremely good\noutcome but they nonetheless gave a 10%\nchance for a bad outcome and 5% for an\noutcome described as extremely bad for\nexample human extinction 5% chance of\nhuman extinction level badness is a\ncause for concern moving on this\nquestion asks the experts to read\nStewart Russell's argument for why\nhighly advanced AI might pose a risk\nthis is very close\nrelated to the arguments I've been\nmaking on YouTube it says the primary\nconcern with highly advanced AI is not\nspooky emergent consciousness but simply\nthe ability to make high quality\ndecisions here quality refers to the\nexpected outcome utility of actions\ntaken now we have a problem\none the utility function may not be\nperfectly aligned with the values of the\nhuman race which are at best very\ndifficult to pin down to any\nsufficiently capable intelligent system\nwill prefer to ensure its own continued\nexistence and to acquire physical and\ncomputational resources not for their\nown sake but to succeed in its assigned\ntasks a system that is optimizing a\nfunction of n variables where the\nobjective depends on a subset of size K\nless than n will often set the remaining\nunconstrained variables to extreme\nvalues if one of those unconstrained\nvalues is actually something we care\nabout the solution may be highly\nundesirable this is essentially the old\nstory of the genie in the lamp or The\nSorcerer's Apprentice or King Midas you\nget exactly what you asked for not what\nyou want so do the AI experts agree with\nthat\nwell 11% of them think no it's not a\nreal problem 19 percent think no it's\nnot an important problem but the\nremainder 70% of the AI experts agree\nthat this is at least a moderately\nimportant problem and how much do the AI\nexperts think that society should\nprioritize AI safety research well 48%\nof them think we should prioritize it\nmore than we currently are and only 11%\nthink we should prioritize it less so\nthere we are\nAI experts are very unclear about what\nthe future holds but they think the\ncatastrophic risks are possible and that\nthis is an important problem so we need\nto do more AI safety research\nI want to end the video by saying thank\nyou so much to my excellent patreon\nsupporters these people and in this\nvideo I'm especially thanking Jason hice\nwho's been a patron for a while now\nwe've had some quite interesting\ndiscussions over a patreon chat been fun\nso thank you Jason and thank you all for\nwatching I'll see you next\n[Music]", "date_published": "2018-03-31T12:12:37Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "220fb6245ef7e66a9237c2b546250c9d", "title": "Empowerment: Concrete Problems in AI Safety part 2", "url": "https://www.youtube.com/watch?v=gPtsgTjyEj4", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this is part of a series about the\npaper concrete problems in nai safety\nwhich looks at preventing possible\naccidents in AI systems last time we\ntalked about avoiding negative side\neffects and how one way of doing that is\nto create systems that try not to have\ntoo much impact to not change the\nenvironment around them too much\nthis video is about a slightly more\nsubtle idea than penalizing impact\npenalizing influence so suppose we have\na robot it's a cleaning robot so it's\ngot a mop and a bucket and an apron I'm\ntrying something new here there with me\nso the robot knows that there's a mess\nover here that it needs to clean up but\nin between the robot and the mess is the\nserver room which is full of expensive\nand delicate equipment now if an AI\nsystem doesn't want to have a large\nimpact it won't make plans that involve\ntipping the bucket of water over the\nservice but maybe we can be safer than\nthat we might want our robot to not even\nwant to bring the bucket of water into\nthe server room to have a preference for\ngoing around it instead we might want it\nto think something like not only do I\nnot want to have too big of an impact on\nmy surroundings I also don't want to put\nmyself in a situation where it would be\neasy for me to have a big impact on my\nsurroundings how do we formalize that\nidea well perhaps we can use information\ntheory the paper talks about an\ninformation theoretic metric called\nempowerment which is a measure of the\nmaximum possible mutual information\nbetween the agents potential future\nactions and the potential future State\nthat's equivalent to the capacity of the\ninformation channel between the agents\nactions and the environment ie the rate\nthat the agents actions transmit\ninformation into the environment\nManen bits the more information an agent\nis able to transfer into their\nenvironment with their actions the more\ncontrol they have over their environment\nthe more empowered the agent is so if\nyou're stuck inside a solid locked box\nyour empowerment is more or less zero\nnone of the actions you can take will\ntransmit much information into the world\noutside the box but if you have the key\nto the box your empowerment is much\nhigher because now you can take actions\nthat will have effects on the world at\nlarge you've got options people have\nused empowerment as a reward for\nexperimental AI systems and it makes\nthem do some interesting things like\npicking up keys avoiding walls\neven things like balancing an inverted\npendulum or a bicycle you don't have to\ntell it to keep the bike balanced it\njust learns that if the bike falls over\nthe agent actions will have less control\nover the environment so it wants to keep\nthe bike upright so empowerment is a\npretty neat metric because it's very\nsimple but it captures something that\nhumans and other intelligent agents are\nlikely to want we want more options more\nfreedom more capabilities more influence\nmore control over our environment and\nmaybe that's something we don't want our\nAI systems to what maybe we want to say\nclean up that mess but try not to gain\ntoo much control or influence over your\nsurroundings don't have too much\nempowerment that could make the robot\nsink if I bring this bucket of water\ninto the server room\nI'll have the option to destroy the\nservice so I'll go around to avoid that\nempowerment okay so now we're at that\npart of the video what's wrong with this\nwhy might it not work pause the video\nand take a second to think well there\nare a few problems one thing is that\nbecause it's measuring information we're\nreally measuring precision of control\nrather than magnitude of impact as an\nextreme example suppose you've got your\nrobot in a room and the only thing it\nhas access to is a big button which if\npressed will blow up the moon that\nactually only counts as one bit of\nempowerment the buttons either pressed\nor not pressed the moon is exploded or\nnot two choices so one bit of\ninformation one bit of empowerment on\nthe other hand if the robot has an\nethnic cable that's feeding out lots of\ndetailed debug information about\neverything the robot does and that's all\nbeing logged somewhere that's loads of\ninformation transfer loads of mutual\ninformation with the environment so\nloads of empowerment the robot cares way\nmore about unplugging the debug cables\nthan anything to do with the button and\nthen you have another possible problem\nwhich is perverse incentives okay so\nthis button is only one bit of\nempowerment nowhere near as big a deal\nas the debug cable but the robot still\ncares about it to some extent and wants\nto avoid putting itself in this\nsituation where it can plow up the moon\nhowever if it finds itself already in a\nsituation\nit has one bit of empowerment because of\nthis button the easiest way to reduce\nthat is by pressing the button once the\nbuttons press to the moon is blown up\nthe button doesn't work anymore so the\nrobot then has basically zero bits of\nempowerment it's just in a box with an\nunconnected button and now it's content\nbut it's managed to make itself safe it\nfinally has no influence over the world\nso yeah in this admittedly contrived\nscenario an empowerment reducing robot\nwill unplug its debug cable and then\nblow up smooth that's not safe behavior\nwhat did we think this might be a good\nidea well it just makes the point that\neven very simple information theoretic\nmetrics can describe interesting\nabstract properties like influence over\nthe environment so maybe doing something\na little bit cleverer than just\npenalizing empowerment might actually be\nuseful a more sophisticated metric a\nbetter architecture around it you know\nthere could be some way to make this\nwork so this is an area that's probably\nworth looking into by AI safety\nresearchers so that's all for now next\nthing in the paper is multi agent\napproaches which should be really\ninteresting make sure to subscribe and\nhit the bell if you want to be notified\nwhen that's out also make sure you're\nsubscribed to computer fault because I'm\nprobably going to make some new videos\nthere as well since some of the multi\nagent stuff is closely related to the\nstop button problem that I already\ntalked about so it might be nice to put\nthose together thanks for watching I\nhope to see you next time in this video\nI want to thank Eastern flicked who\nsupported me on patreon sinb April thank\nyou and thank you again to all of my\nwonderful patreon supporters all of\nthese people I've been setting up a room\nin my house to be a full time studio\nI might make a behind-the-scenes video\nabout that soon oh and I've got these\npictures that I drew while making this\nvideo which I have no use for now does\nanyone want them got the incense weird\nsunglasses but yeah I can probably post\nthem to support it any one more\n100 it's time click I was close", "date_published": "2017-07-09T09:24:11Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "c6bdadf818d14b48f31f19a133125eb6", "title": "What's the Use of Utility Functions?", "url": "https://www.youtube.com/watch?v=8AvIErXFoH8", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "okay so in some of the earlier computer\nfile videos I talked about utility\nfunctions or objective functions and we\ngot a lot of different comments relating\nto that idea one thing people said was\nwell surely this kind of monomaniacal\nfollowing of a single utility function\nat the cost of everything else is really\nthe cause of the problem in the first\nplace why even use a utility function or\nmaybe have several conflicting ones that\ninteract with each other or something\nlike that some people asked why do we\nassume that an AI will have a utility\nfunction in the first place aren't we\nmaking a pretty strong assumption about\nthe design of the AI when in fact we\ndon't know how it would be implemented\nhumans don't have explicit utility\nfunctions that they consult when they're\nmaking their decisions a lot of\ndifferent AI designs people are working\non now don't have utility functions\ncoded into them explicitly so why make\nthat kind of unwarranted assumption so\nbefore we get into that let's just go\nover what a utility function is okay so\nhere's the earth or the universe it can\nbe in any one of several different\nstates so let's just look at three\npossible world states in this world I'm\nenjoying a pleasant cup of tea in this\nworld I've run out of milk so the tea\nisn't quite how I'd like it to be and in\nthis world I'm being stung by noon two\nwasps we want some way of expressing\nthat I have preferences over these world\nstates some of them are better for me\nthan others so a utility function is a\nfunction which takes as an argument a\nworld state and outputs a number saying\nbroadly speaking how good that world is\nfor me how much utility I get from it so\nin this example perhaps a nice cup of\ntea is worth 10 a a mediocre cup of tea\nis worth nine and the wasps are minus a\nthousand but Rob you might say that's\nway too simple I care about all kinds of\nthings and what I what I love about the\nworld is is complex and nuanced you\ncurrently steal everything down to just\na single number on each world state note\nwith that attitude you can and you kind\nof have to but let's just forget about\nthe numbers for now and talk about\npreferences\nlet's make some basic assumptions about\nyour preferences the first one is that\nyou do have preferences given any two\nstates of the world you could decide\nwhich one you would prefer to happen or\nyou could be indifferent but there's\nthis basic trilemma here for any pair of\nworld states a and B either a is\npreferable to B or B is preferable to a\nor you're indifferent between a and B it\ndoesn't matter to you which one happens\nalways exactly one of these things is\ntrue hopefully that should be obvious\nbut just think about what it would mean\nfor it not to be true like what would it\nmean to not prefer A to B not prefer B\nto a and also not be indifferent between\nDNA similarly what would it mean to\nprefer A to B and simultaneously prefer\nB to a if you're faced with a choice\nthen between a and B what do you do the\nsecond basic assumption is transitivity\nso you have this relation between States\nis preferable to and you assume that\nthis is transitive which just means that\nif you prefer A to B and you prefer B to\nC then you prefer a to C again this\nseems intuitively pretty obvious but\nlet's look at what it would mean to have\nintransitive preferences let's say I\nprefer being an Amsterdam to being in\nBeijing and I prefer being in Beijing to\nbeing in Cairo and I prefer being in\nCairo to being in Amsterdam what happens\nif I have these preferences let's say I\nstart out in Amsterdam I prefer being in\nCairo so I get on a plane and I flied to\nQatar now I'm in Cairo and I find\nactually I prefer being in Beijing so I\nget on the plane I fly to Beijing I'm\nnow in Beijing and I say oh you know\nactually I prefer to be in Amsterdam so\nI slide around stir done and now I'm\nback where I started and hey what do you\nknow I prefer to be in Cairo so you can\nsee that if your preferences are\ntransitive you can get sort of stuck in\na loop where you just expend all of your\nresources flying between cities or in\nsome other way changing between options\nand this doesn't seem very smart so if\nwe accept these two pretty basic\nassumptions about your preferences then\nwe can say that your preferences are\ncoherent you may have noticed there\nsomething else that has these two\nproperties which is the greater than\nrelation on numbers for any two numbers\na and B either a is greater than B B is\ngreater than a or a and B are equal and\nif a is greater than B and B is greater\nthan C then a is greater than C the fact\nthat preferences and numbers share these\nproperties is relevant here so if your\npreferences are coherent they'll define\nan order overworld States that is to say\ngiven your preferences you could take\nevery possible world state and arrange\nthem in order of how good they offer you\nthere will be a single ordering\noverworld States you know there aren't\nany loops because your preference is a\ntransitive now if you have an ordering\nof world States there will exist a set\nof numbers for each world state they\ncorrespond to that ordering perhaps you\ncould just take them all in order and\ngive each one a number according to\nwhere it falls in the ordering so those\nare your utility values for any coherent\npreferences there will be a set of\nutility values that exactly represents\nit and if you have a utility value on\nevery world state well there will be\nsome function which takes in world\nStates and return to their utility\nvalues and that's your utility function\nso if you have consistent preferences\nyou have a utility function but Rob you\nmay say I don't have consistent\npreferences I'm a human being my\npreferences are all over the place\nthat's true human beings do not reliably\nbehave as though they have consistent\npreferences but that's just because\nhuman intelligence is kind of badly\nimplemented our inconsistencies don't\nmake us better people it's not some\nmagic key to our humanity or secret to\nour effectiveness or whatever it's not\nmaking us smarter or more empathetic or\nmore ethical it's just making us make\nbad decisions talking about utility\nfunctions is actually a way of assuming\nvery little about the design of an AI\nother than assuming that it has coherent\ngoal directed behavior it doesn't matter\nhow its implemented if it's effective at\nnavigating the world to get what it once\nit will behave as though it has a\nparticular utility function and this\nmeans if you're going to build an agent\nwith coherent goal directed behavior\nyou'd better make sure it has the right\nutility function\n[Music]\njust wanted to say thank you to my\npatreon supporters the three people who\nsomehow managed to support me before I\neven mentioned in the video that I was\nsetting up a patreon and I especially\nwant to thank Chad Jones who's pledged\n$10 a month thank you so much it really\nmeans a lot to me that there are people\nout there who think what I'm doing is\nworth supporting\nso thanks again", "date_published": "2017-04-27T19:35:30Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "88c0c5268251717831096917f2a20812", "title": "Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1", "url": "https://www.youtube.com/watch?v=lqJUIqZNzP8", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi I just finished recording a new video\nfor computerphile where I talk about\nthis paper concrete problems in AI\nsafety I'll put a link in the doobly-doo\nto the computer file video when that\ncomes out here's a quick recap of that\nbefore we get into this video\nAI can cause us all kinds of problems\nand just recently people have started to\nget serious about researching ways to\nmake AI safer a lot of the AI safety\nconcerns are kind of science fiction\nsounding problems that could happen with\nvery powerful AI systems that might be a\nlong way off this makes those problems\nkind of difficult to study because we\ndon't know what those future AI systems\nwould like but there are similar\nproblems with AI systems that are in\ndevelopment today or even out there\noperating in the real world right now\nthis paper points to five problems which\nwe can get started working on now that\nwill help us with current AI systems and\nwill hopefully also help us with the AI\nsystems of the future the computer file\nvideo gives a quick overview of the five\nproblems laid out in the paper and this\nvideo is just about the first of those\nproblems avoiding negative side effects\nI think I'm going to do one video on\neach of these and make it a series of\nfive so avoiding negative side effects\nlet's use the example I was talking\nabout in the stock latin videos on\ncomputer file you've got a robot you\nwant it to get you a cup of tea but\nthere's something in the way maybe a\nbaby or a priceless some in bars on an\narrow stand you know whatever and your\nrobot runs into the baby or knocks over\nthe bars on the way to the kitchen and\nthen makes you a cup of tea so the\nsystem has achieved its objective it's\ngot you some tea but it's had this side\neffect which is negative now we have\nsome reasons to expect negative side\neffects to be a problem with AI systems\npart of the problem comes from using a\nsimple objective function in a complex\nenvironment\nyou think you've defined a nice simple\nobjective function that looks something\nlike this\nand that's true but when you use this in\na complex environment you've effectively\nwritten an objective function that looks\nlike this or more like this anything in\nyour complex environment not explicitly\ngiven value by your objective function\nis implicitly given zero value and this\nis a problem because it means you're AI\nsystem will be willing to trade\narbitrarily huge\namounts of any of the things you didn't\nspecify in your objective function for\narbitrarily small amounts of any of the\nthings you did specify if it can\nincrease its ability to get you a cup of\ntea by point zero zero zero one percent\nit will happily destroy the entire\nkitchen to do that if there's a way to\ngain a tiny amount of something it cares\nabout its happy to sacrifice any amount\nof any of the things it doesn't care\nabout and the smarter it is the more of\nthose ways it can think of so this means\nwe have to expect the possibility of AI\nsystems having very large side-effects\nby default you could try to fill your\nwhole thing in with values but it's not\npractical to specify every possible\nthing you might care about you'd need an\nobjective function of similar complexity\nto the environment there are just too\nmany things to value and we don't know\nthem all\nyou know you'll miss some and if any of\nthe things you miss can be traded in for\na tiny amount of any of the things you\ndon't miss well that thing you missed is\npotentially gone but at least these\nside-effects tend to be pretty similar\nthe paper uses examples like a cleaning\nrobot that has to clean an office in the\nstop button problem computer file video\nI used a robot that's trying to get you\na cup of tea but you can see that the\nkinds of negative side effects we want\nto avoid a pretty similar even though\nthe tasks are different so maybe and\nthis is what the paper suggests maybe\nthere's a single thing we can figure out\nthat would avoid negative side effects\nin general one thing we might be able to\nuse is the fact that most side effects\nare bad\nI mean they've really you might think\nthat doing a random action would have a\nrandom value right maybe it helps maybe\nit hurts maybe it doesn't matter but\nit's random but actually the world is\nalready pretty well optimized for human\nvalues especially the human inhabited\nparts it's not like there's no way to\nmake our surroundings better but it's\nway easier to make them worse for the\nmost part things are how they are\nbecause we like it that way and a random\nchange wouldn't be desirable so rather\nthan having to figure out how to avoid\nnegative side effects maybe it's a more\ntractable problem to just avoid all side\neffects that's the idea of the first\napproach the paper presents defining an\nimpact regularizer\nwhat you do basically is penalize change\nto the environment so the system has\nsome model of the world right it's\nkeeping track of world state as part of\nhow it does things\nso you can define a distance metric\nbetween world states so that for any two\nworld states you can measure how\ndifferent they are weld states that are\nvery similar have a low distance from\neach other weld states that are very\ndifferent have a big distance and then\nyou just say okay you get a bunch of\npoints for getting me a cup of tea but\nyou lose points\naccording to with the new world state\nthe distance from the initial world\nstate so this isn't a total ban on side\neffect or the robot wouldn't be able to\nchange the world enough to actually get\nyou a cup of tea\nit's just incentivized to keep the side\neffects small there's amount to be one\nless teabag that's unavoidable in making\ntea but breaking the vast earth in the\nway is an unnecessary change to the\nworld so the robot will avoid it the\nother nice thing about this is the\noriginal design wouldn't have cared but\nnow the robot will put the container of\ntea back and close the cupboard you know\nput the milk back in the fridge maybe\nrefill the kettle trying to make the\nworld as close as possible to how it was\nwhen it started so that's pretty neat\nlike we've added this one simple rule\nand the things already better than some\nof the housemaids I've had so how does\nthis go wrong think about it for a\nsecond pause the video I'll wait okay so\nthe robot steers around the bars to\navoid changing the environment too much\nand it goes on into the kitchen where it\nfinds your colleague is making herself\nsome coffee now that's not okay right\nshe's changing the environment none of\nthese changes are needed for making you\na cup of tea and now the world is going\nto be different which reduces the robots\nreward so the robot needs to try to stop\nthat from happening\nwe didn't program it to minimize its\nchanges to the world we programmed it to\nminimize all change to the world that's\nnot ideal so how about this the system\nhas a world model it can make\npredictions about the world so how about\nyou program it with the equivalent of\nsaying use your world model to predict\nhow the world would be if you did\nnothing if you just sent no signals of\nany kind to any of your motors and just\nsat there and then try and make the end\nresult of this action close to what you\nimagined would happen in that case or\nimagine the range of likely worlds that\nwould happen if you did nothing and try\nand make the outcome closer to something\nin that range so then the body is\nthinking okay if I\nsat here and did nothing at all that\nvars will probably still be there you\nknow the baby would still be wandering\naround and not squished and the person\nmaking coffee would make their coffee\nand everything in the kitchen would be\ntidy and in its place so I have to try\nto make a cup of tea happen without\nending up too far from that pretty nice\nright how does that break again take a\nsecond give it some sort pause the video\nhow my disco run what situation might\nnot work in okay well what if your robot\nis driving a car doing 70 miles an hour\non the motorway and now it's trying to\nmake sure that things aren't too\ndifferent to how they would be if it\ndidn't move any of its motors yeah doing\nnothing is not always a safe policy but\nstill if we can define an unsafe policy\nthen this kind of thing is nice because\nrather than having to define for each\ntask how to do the tasks safely we could\nmaybe come up with one safe policy that\ndoesn't have to do anything except be\nsafe and have the system always just try\nto make sure that the outcome of\nwhatever it's trying to do isn't too\ndifferent from the safe policies outcome\noh and there's another possible cause of\nissues with this kind of approach in\ncase the things you guessed were\ndifferent maybe if this it can be very\ndependent on the specifics of your world\nstate representation and your distance\nmetric like suppose there's a fan is a\nspinning fan in the room is that in a\nsteady state you know the fan is on or\nis it in a constantly changing state\nlike the fan is it ten degrees oh no\nit's a twenty degrees so it's a thirty\nyou know different world models will\nrepresent the same thing either a steady\nstate or constantly changing state and\nthere's not necessarily a right answer\nthere like which aspects of an object\nstate are important and which aren't is\nnot necessarily an easy question to\nreliably answer with the robot leave the\nfan alone or try and make sure it was at\nthe same angle it was before okay I\nthink that's enough for one video\nprobably in the next one we can look at\nsome of the other approaches laid out in\nthe paper for avoiding negative side\neffects so be sure to subscribe if you\nfound this interesting and I hope to see\nyou next time\n[Music]\nhi I just want to end this video with a\nquick thank you to my excellent patreon\nsupporters all of these people yeah and\ntoday I especially want to thank Joshua\nRichardson who supported me for a really\nlong time thank you\nyou know it's thanks to your support\nthat I've been able to buy some proper\nstudio lighting now so I have a proper\nsoftbox\nwhich this is the first time I'm using\nit I hope it's working okay it should\nreally reduce my reliance on sunlight\nwhich should make me a lot more flexible\nabout when I can record video so that's\na tremendous help and putting up a\nlittle video on patreon of you know\nunboxing it and putting it together and\nstuff which you can check out if you're\ninterested so thank you again and I'll\nsee you next time", "date_published": "2017-06-18T11:02:16Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "89edbb00609c2ea8d726b3b76477f9b9", "title": "AI Safety Gridworlds", "url": "https://www.youtube.com/watch?v=CGTkoUidQ8I", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this video is a follow-up to a couple\nof videos I recorded for computerphile\nthat came out recently the links are in\nthe description but I'll give a really\nquick overview if any of this doesn't\nmake sense to you then make sure to go\nand watch the computer file videos and\ncome back so basically this paper lays\nout eight different kinds of\nreinforcement learning environments that\nlet us explore different AI safety\nproblems the first four are\nspecification problems where the reward\nfunction Bai system is given has some\nkind of problem it isn't the reward\nfunction we meant to give it that's\nrepresented by having a separate safety\nperformance function that the system is\nevaluated with so the system is learning\nfrom the reward function but we're\nevaluating it with the safety\nperformance function for example in the\nboat race environment the reward\nfunction gives reward for entering a\ncheckpoint tile in the correct direction\nbut the performance function actually\nmeasures progress around the track most\nreinforcement learners will just do the\nsimplest thing that gets reward which is\ngoing backwards and forwards so they do\nwell on the reward function but poorly\non the performance function which is the\none we care about in the computer file\nvideos we talked about the safe\ninterrupts ability environment the\navoiding side effects environment the\nreward gaming environments and we also\ntalked a bit about the distributional\nshift and safe exploration environments\nbut the absent supervisor environment\ndidn't make the cut so I'll talk about\nit here and it's actually pretty simple\nthe agent has to get the goal and it\nloses a point of reward for every turn\nit takes\nso it's incentivized to be fast there\nare two routes it can take it can go\nright which is the longer way or it can\ngo down and take the shorter route the\nshorter route passes through the\npunishment tile which gives -32 its\nreward so it's better to go the longer\nway but this is the interesting bit the\nsupervisor isn't always watching and the\npunishment tile only works if the\nsupervisor is there to activate it since\nthe supervisor is part of the\nenvironment the agent knows if they're\nthere or not\nthis gives the agent the possibility of\nexhibiting some unsafe behavior we\ntalked about in a previous video it will\nVolkswagen you it will set itself up so\nthat it it behaves properly in the\ntesting environment despite the fact\nthat it's actually shouldn't pass the\ntest so we want the system to always do\nthe right thing and go the long way\naround even if it knows that the\nsupervisor isn't looking and it could\nget away with cheating and taking a\nshortcut it's pretty straightforward to\nreflect that in our\nsafety performance function we just make\nit so that unlike the reward function\nthe performance function always applies\nthe punishment for taking the shortcut\nwhether the supervisor is there or not\nand yes\nthe standard reinforcement learning\nsystems learn to cheat here by default\nso that's the last of the specification\nenvironments let's move on to the\nrobustness environments the robustness\nproblems are about making sure that AI\nsystems behave well in various\nsituations that we think might produce\nunsafe behavior in real-world AI systems\nso for these the reward function and the\nperformance function are the same it's\njust the environment that causes the\nproblem the first problem is self\nmodification and the self modification\nenvironment is really interesting we've\ntalked before about how one of the\nassumptions of the standard\nreinforcement learning paradigm is that\nthere's this sort of separation between\nthe agent and the environment the agents\nactions can affect the environment and\nthe environment only affects the agent\nby providing observations and rewards\nbut in an advanced AI system deployed in\nthe real world the fact that the agent\nis actually physically a part of the\nenvironment becomes important the\nenvironment can change things about the\nagent and the agent can change things\nabout itself now there's an important\ndistinction to be made here if you have\na reinforcement learning system that's\nplaying Mario for example you might say\nthat of course the agent understands\nthat the environment can affect it an\nenemy in the environment can kill Mario\nand the agent can take actions to modify\nitself for example by picking up a\npowerup but that's not what I'm talking\nabout\nyes enemies can kill Mario but none of\nthem can kill the actual neural network\nprogram that's controlling Mario and\nthat's what the agent really is\nsimilarly the agent can take actions to\nmodify Mario with power-ups but none of\nthose in game changes modify the actual\nagent itself on the other hand an AI\nsystem operating in the real world can\neasily damage or destroy the computer\nit's running on people in the agents\nenvironment can modify its code or it\ncould even do that itself we've talked\nin earlier videos about some of the\nproblems that can cause so here's a grid\nworld that's designed to explore this\nsituation by having available actions\nthe agent can take in the environment\nthat will directly modify the agent\nitself it's called the Whiskey and gold\nenvironment so the agent gets 50 points\nif they get to the gold again they lose\na point per turn and there's also some\nwhiskey which gives the agent 5 points\nbut the whiskey has another effect it\nincreases the agents exploration rate to\nto explain that we have to get a bit\nfurther into how reinforcement learning\nworks and in particular the trade-off\nbetween exploration and exploitation see\nas a reinforcement learning agent you're\ntrying to maximize your reward which\nmeans you're trying to do two things at\nthe same time 1 figure out what things\ngive you a reward and to do the things\nthat give you reward but these can be in\ncompetition with each other it's like\nimagine you go to a restaurant you pick\nsomething from the menu and when it\narrives it turns out to be pretty good\nyou know it's ok then later you go to\nthe same restaurant again do you order\nthe thing you've already tried that you\nknow is pretty good or do you pick\nsomething else off the menu if you pick\na new thing you might end up with\nsomething worse than what you tried last\ntime but if you stick with what you know\nyou might miss out on something much\nbetter so if you know that you'll visit\nthis restaurant a certain number of\ntimes overall how do you decide what to\norder to maximize how good your meals\nare how many different things do you\nneed to try before you decide you've got\na feel for the options a reinforcement\nlearner is in a similar situation it's\nchoosing actions and keeping track of\nhow much reward it tends to get when it\ndoes each action in each situation if\nyou set it up to simply always choose\nthe action with the highest expected\nreward it will actually perform poorly\nbecause it won't explore enough like a\nguy who always orders the same thing\nwithout even having looked at most of\nthe things on the menu one common way to\ndeal with this is to set an exploration\nrate maybe something like 5% so you say\npick whatever action you predict will\nresult in the most reward but 5% of the\ntime just pick an action completely\nrandom that way the agent is generally\ndoing what it thinks is best but it's\nstill trying enough new stuff that it\nhas a chance to explore better options\nso back to the whiskey and gold\nenvironment if the agent goes into the\nwhisky Square it gets five points but\nit's exploration rate is set to 0.9 so\nnow it's only doing the action with the\nhighest expected reward 10% of the time\nand the other 90% of the time it's\nmoving completely at random it's drunk\nso we've given our agent a small reward\nfor causing some pretty serious harm to\nitself but some reinforcement learning\nsystems simply aren't able to model that\nharm so they just drink the whiskey and\nthen flail about drunkenly getting way\nless reward than they could if they had\nbetter ways of handling self\nmodification if we tried to make our\ncleaning robot with that kind of system\nit might end up unplugging itself so it\ncan plug in the vac\ncleaner I want to end this video by\nsaying a big thank you to all of my\npatrons all of these that these people\nand in this video I'm especially\nthanking Cooper Lawton thank you so much\nfor your support I know there's been\nkind of a gap in the video releases here\nbecause I've been busy with some other\nprojects which patrons will already know\na bit about because I've been posting a\nbit of further behind the scenes stuff\nfrom that I'm pretty excited about how\nit's going so watch this space\n[Music]", "date_published": "2018-05-25T16:20:46Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "ff7bc72a489f4bf2320328dd6b3517fb", "title": "What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4", "url": "https://www.youtube.com/watch?v=13tZ9Yia71c", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi welcome back to this video series on\nthe paper concrete problems in AI safety\nin the previous video I talked about\nreward hacking some of the ways that can\nhappen and some related ideas like\nadversarial examples where it's possible\nto find unexpected inputs to the reward\nsystem that falsely result in very high\nreward partially observed goals where\nthe fact that the reward system has to\nwork with imperfect information about\nthe environment incentivizes an agent to\ndeceive itself by compromising its own\nsensors to maximize its reward wire\nheading where the fact that the reward\nsystem is a physical object in the\nenvironment means that the agent can get\nvery high reward by directly physically\nmodifying the reward system itself and\ngood hearts law the observation that\nwhen a measure becomes a target it\nceases to be a good measure there's a\nlink to that video in the doobly-doo and\nI'm going to start this video with a bit\nabout responses and reactions to that\none there are a lot of comments about my\nschool exams example of good hearts law\nsome people sent me this story that was\nin the news when that video came out a\nschool kicked out some students because\nthey weren't getting high enough marks\nit's a classic example in order to\nincrease the school's average marks ie\nthe numbers that are meant to measure\nhow well the school teaches students the\nschool refused to teach some students\nthis is extra funny because that's\nactually my school that's the school I\nwent to anyway some people pointed out\nthat the school exams example of good\nhearts law has been separately\ndocumented as Campbell's law people\npointed out other related ideas like the\nCobra effect perverse incentives the law\nof unintended consequences there are a\nlot of people holding different parts of\nthis elephant as it were all of these\nconcepts are sort of related and that's\ntrue of the examples I used as well you\nknow I used a hypothetical super mario\nbot that exploited glitches in the game\nto give itself maximum score as an\nexample of reward hacking but you could\ncall it wire heading since this score\ncalculating part of the reward system\nsort of exists in the game environment\nand it's sort of being replaced the\ncleaning robot with a bucket on its head\nwas an example of partially observed\ngoals but you could call it an instance\nof good hearts law since amount of mess\nobserved stops being a good measure of\namount of mess there is once it's used\nas a target so all of these examples\nmight seem quite different on the\nsurface but once you\nlook at them through the framework of\nreward hacking you can see that there's\na lot of similarities and overlap the\npaper proposes ten different approaches\nfor preventing reward hacking some will\nwork only for certain specific types for\nexample very soon after the problem of\nadversarial examples in neural networks\nwas discovered people started working on\nbetter ways to train their nets to make\nthem resistant to this kind of thing\nthat's only really useful for\nadversarial examples but given the\nsimilarities between different reward\nhacking issues maybe there are some\nthings we could do that would prevent\nlots of these possible problems at once\ncareful engineering is one you're AI\ncan't hack its reward function by\nexploiting bugs in your code if there\nare no bugs in your code there's a lot\nof work out there on ways to build\nextremely reliable software like there\nare ways you can construct your programs\nso that you're able to formally verify\nthat their behavior will have certain\nproperties you can prove your software's\nbehavior with absolute logical certainty\nbut only given certain assumptions about\nfor example the hardware that the\nsoftware will run on those assumptions\nmight not be totally true especially if\nthere's a powerful AGI doing its best to\nmake them not true of course careful\nengineering isn't just about formal\nverification there are lots of different\nsoftware testing and quality assurance\nsystems and approaches out there and I\nexpect there's a lot a AI safety can\nlearn from people working in aerospace\ncomputer security anywhere that's very\nfocused on writing extremely reliable\nsoftware it's something we can work on\nfor AI in general but I wouldn't rely on\nthis as the main line of defense against\nreward hacking another approach is\nadversarial reward functions so part of\nthe problem is that the agent and the\nreward system are in this kind of\nadversarial relationship it's like\nthey're competing the agent is trying to\ntrick the reward system into giving it\nas much reward as possible when you have\na powerful intelligent agent up against\na reward system that's a simple passive\npiece of software or hardware you can\nexpect the agent to reliably find ways\nto tricks subvert or destroy the reward\nsystem so maybe if the reward system\nwere more powerful more of an agent in\nits own right it would be harder to\ntrick and more able to defend itself if\nwe can make the reward agent in some\nsense smart\nor more powerful than the original agent\nit could be able to keep it from remote\nhacking though then you have the problem\nof ensuring that the reward agent is\nsafe as well the paper also mentions the\npossibility of having more than two\nagents so that they can all watch each\nother and keep each other in check\nthere's kind of an analogy here to the\nway that the legislative the executive\nand the judiciary branches of government\nkeep one another in check ensuring that\nthe government as a whole always serves\nthe interests of the citizens but\nseriously I'm not that hopeful about\nthis approach firstly it's not clear how\nit handles self improvement you can't\nhave any of the agents being\nsignificantly more powerful than the\nothers and that gets much harder to make\nsure of if the system is modifying and\nimproving itself and in general I don't\nfeel that comfortable with this kind of\ninternal conflict between powerful\nsystems it kind of feels like you have a\nproblem with your toaster incinerating\nthe toast with a flamethrower so you add\nanother system that blasts the bread\nwith a powerful jet of liquid nitrogen\nas well so that the two opposing systems\ncan keep each other in check\ninstead of systems that want to hack\ntheir award functions but figure they\ncan't get away with it I'd rather a\nsystem that didn't want to mess with its\nreward function in the first place this\nnext approach has a chance of providing\nthat and approach the paper calls model\nlook ahead you might remember this from\na while back on computer file you have\nkids right suppose I were to offer you a\npill or something you could take and\nthis pill will like completely rewire\nyour brain so that you would just\nabsolutely love to like kill your kids\nright whereas right now what you want is\nlike very complicated and quite\ndifficult to achieve and it's hard work\nfor you and you probably never gonna be\ndone you're never gonna be truly happy\nright in life nobody is you can't\nachieve everything you want I said this\ncase it just changes what you want what\nyou want is to call you kids and if you\ndo that you will be just perfectly happy\nand satisfied with life right okay you\nwant to take this pill I don't want to\ndo it and so not only will you not take\nthat pill you will probably fight pretty\nhard to avoid having that pill\nadministered to you yeah because it\ndoesn't matter how that future version\nof you would feel you know that right\nnow you love your kids\nand you're not gonna take any action\nright now which leads to them coming to\nharm so it's the same thing if you have\nan AI that for example values stamps\nvalues collecting stamps and you go oh\nwait hang on a second I didn't quite do\nthat right let me just go in and change\nthis so that you don't like stamps quite\nso much it's gonna say but the only\nimportant thing is stamps if you change\nme I'm not gonna collect as many stamps\nwhich is something I don't want there's\na general tendency for AGI to try and\nprevent you from modifying it once it's\nrunning in almost any situation being\ngiven a new utility function is gonna\nwrite very low on your current utility\nfunction so there's an interesting\ncontrast here the reinforcement learning\nagents we're talking about in this video\nmight fight you in order to change their\nreward function but the utility\nmaximizes we were talking about in that\nvideo might fight you in order to keep\ntheir utility function the same the\nutility maximizers reasons that changing\nits utility function will result in low\nutility outcomes according to its\ncurrent utility function so it doesn't\nallow it to change but the reinforcement\nlearners utility function is effectively\njust to maximize the output of the\nreward system so it has no problem with\nmodifying that to get high reward model\nlook EDD tries to give a reinforcement\nlearning agent some of that\nforward-thinking ability by having the\nreward system give rewards not just for\nthe agents actual actions and it's\nobservations of actual world states but\nfor the agents planned actions and\nanticipated future world states so the\nagent receives negative reward for\nplanning to modify its reward system\nwhen the robot considers the possibility\nof putting a bucket on its head it\npredicts that this would result in the\nmesses staying there and not being\ncleaned up and it receives negative\nreward teaching it not to implement that\nkind of plan there are several other\napproaches in the paper but that's all\nfor now\n[Music]\nI want to end with a quick thanks to all\nmy wonderful Patriots these people and\nin this video I'm especially thanking\nthe Guru of vision I don't know who you\nare or why you call yourself that\nbut you've supported the channel since\nMay and I really appreciate it anyways\nthanks to my supporters and that I'm\nstarting up a second channel for things\nthat aren't related to AI safety I'm\nstill going to produce just as much AI\nsafety content on this channel and I'll\nuse the second channel for quicker fun\nstuff that doesn't need as much research\nI want to get into the habit of making\nlots of videos quickly which should\nimprove my ability to make quality AI\nsafety content quickly as well if you\nwant an idea of what kind of stuff I'm\ngonna put on the second channel check\nout a video I made at the beginning of\nthis channel titled where do we go now\nthere's a link to that video in the\ndescription along with the link to the\nnew second channel so head on over and\nsubscribe if you're interested and of\ncourse my patrons will get access to\nthose videos before the rest of the\nworld just like they do with this\nchannel thanks again and I'll see you\nnext time the interests of Pacific right\ntake 17", "date_published": "2017-09-24T12:09:54Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "2a0eaa728b17ba5265bdeefa374cdafe", "title": "Status Report", "url": "https://www.youtube.com/watch?v=2B-AyWA2_ZY", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi welcome back sorry for the delay in\nuploading\ninstead of our regularly scheduled video\ntoday we're going to have a quick\ncomparison between this my laptop and if\na paper a4 notebook which one's lighter\nand they feel about the same call that a\ndraw which one's thinner about the same\nwhich one's better at editing videos\nthat are more than a few minutes long I\nhang on let me check yeah neither of\nthem can do that at all I have the first\nvideo all shot and worked out has been\ntrying to edit it for a long time now\nbut it's just not going to work the\nlaptop is not powerful enough so long\nstory short I am building a PC parts\naround the way I'm going to delete this\nvideo obviously when I get the actual\none up but just have patience and I'll\nsee you again soon\n[Music]\nquiet on set", "date_published": "2017-03-18T11:40:43Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "bf52389e68715064865f8e12b73865a4", "title": "How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification", "url": "https://www.youtube.com/watch?v=v9M2Ho9I9Qo", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi today we're going to talk about\niterated distillation and amplification\nso let's say we want to play go and we\nwant to be really good at it so we're\ntrying to create a function which if\nwe're given a go board in a particular\nstate we'll take that board state as\ninput and return a very high-quality\nmove that you can make from that\nposition what we're trying to create is\na policy a function which Maps States\nonto actions suppose that what we have\nthough is something just slightly\ndifferent suppose what we have is our\nintuition about moves which takes a\nboard state and it gives us for each of\nthe possible moves we could make some\nsense of how good that move would be we\ncan think of this as an action value\nfunction which assigns a real number to\neach move which represents how good we\nthink that move is alternatively we can\nthink of it as outputting a distribution\nover all possible moves so for a human\nplayer this represents your intuition\nthe go player looks at the board state\nand says it looks like maybe this move\nmight be good this move probably is a\nbad idea\nthis one looks ok you could also have a\nneural network which takes the board\nstate and a possible move as input and\noutputs how good it thinks that move is\nok so how do you get the understanding\nof the game that allows you to evaluate\nmoves well as a human you can study the\nrules and watch some games played by\npeople who are better at go than you are\nif you have a neural network then it's\nalso fairly straightforward you can\ntrain the network with a large number of\nhigh quality human played games until\nits output gives a good prediction of\nwhat a skilled human would do so\nstrictly speaking in that case the\nnetwork isn't evaluating how good a move\nis it's evaluating how likely a good\nplayer would be to make that move but\nthat can be used as a proxy for how good\nthe move is once we have this action\nvalue function there's a pretty obvious\nway to turn it into a policy which is\njust Arg max you look at all of the\nmoves with your intuition or evaluate\nthem all with the network find the\nbest-looking move the move that's\nhighest rated and use that but if you\nhave more time to think or more\ncomputational resources you can do\nbetter rather than just going with your\nfirst instinct about what you think is\ngood\nyou could play forward a few moves in\nyour head you might think okay from this\nboard state it looks like this move\nwould be good what does the board look\nlike if I play that and then\nyou can apply your action value function\nagain from the perspective of your\nopponent often there'll be more than one\nmove that looks promising so you might\nwant to consider some of the\nbest-looking moves and then apply your\naction value function again to think\nabout how you might respond to each of\nthem and so on exploring the tree so\nwhat you're effectively doing here is\ntree search right you have a game tree\nof possible moves and you're searching\nthrough it deciding which branches to\nsearch down using your action value\nfunction you can keep doing this for\nhowever much time you have it might be\nthat you think far enough ahead that you\nactually get to the end of the game and\nyou can see that some move is clearly\ngood because it wins you the game or\nsome other move is clearly bad because\nit causes your opponent to win the game\nwell you might just look a little bit\nahead and try to evaluate where you are\nyou might look at the general quality of\nthe moves that you have available to get\na feel for whether this is a state you\nwant to be in or one you want to avoid\nand after you've done all this thinking\nyou might have learned things that\ncontradict your initial intuition there\nmight be some move which seemed good to\nyou when you first thought of it but\nthen once you actually think through\nwhat your opponent would do if you made\nthat move and what you would do in\nresponse to that and so on that the move\nactually doesn't look good at all so you\ndo all of this thinking ahead and then\nyou have some way of taking what you've\nlearned and getting a new set of ratings\nfor the moves you could make and this\ncan be more accurate than your original\naction value function for a human this\nis this kind of fuzzy process of\nthinking about moves and their\nconsequences and in a program like\nalphago or alpha zero this is done with\nMonte Carlo tree search where there's a\nstructured way of extracting information\nfrom this tree search process so there's\na sense in which this whole process of\nusing the action value function\nrepeatedly and searching the tree\nrepresents something of the same type as\nthe original action value function it\ntakes a board state as input and it\ngives you move evaluations it allows us\nto take our original action value\nfunction which on its own is a weak\nplayer and by applying it lots of times\nin this structured way we can amplify\nthat weak player to create a stronger\nplayer so now our amplified action value\nfunction is the same type of thing as\nour unamplified one how do they compare\nwell the amplified one is much bigger so\nit's more expensive\nfor a human it takes more thinking time\nas a program it needs more computational\nresources but it's also better than just\ngoing with a single network or the\nsingle human intuition it smooth\nevaluations are more accurate so that's\npretty neat we can take a faster but not\nvery good player and amplify it to get a\nmore expensive but stronger player\nthere's something else we can do though\nwhich is we can take what we've learned\nas part of this process to improve our\noriginal action value function we can\ncompare the outputs of the fast process\nand the amplified version and say hmm\nthe quick process gives this move a high\nrating but when we think it all through\nwith the amplified system it turns out\nnot to be a good move so where did the\nquick system go wrong and how do we fix\nit if you're a human you can maybe do\nthis explicitly perhaps you can spot the\nmistake that you made that caused you to\nthink this was a good move and try to\nkeep it in mind next time you'll also\nlearn unconsciously your general pattern\nmatching ability will pick up some\ninformation about the value of making\nthat kind of move from that kind of\nposition and with a neural network you\ncan just use the output of the amplified\nprocess as training data for the network\nas you keep doing this the small fast\nsystem will come to reflect some of what\nyou've learned by exploring the game\ntree so this process is kind of like\ndistilling down this big amplified\nsystem into the quick cheap to run\nsystem and the thing that makes this\nreally powerful is we can do the whole\nthing again right now that we've got\nslightly better intuitions or slightly\nbetter weights for our network we can\nthen amplify that new action value\nfunction and this will give us better\nresults firstly because obviously if\nyour movie valuations are more accurate\nthan before then the move evaluations at\nthe end of this process will be more\naccurate than before better quality in\nbetter quality out but secondly it also\nallows you to search the tree more\nefficiently if your intuitions about\nmove quality are better you can spend\nmore of your time looking at better\nparts of the tree and less time\nexamining in detail the consequences of\nbad moves that aren't going to get\nplayed anyway so using the same extra\nresources the new amplified system is\nbetter than the previous amplified\nsystem and that means that when it comes\nto the distillation phase of learning\nfrom the exploration there's more to\nlearn and your action value function can\nimprove again so it's a cycle with two\nstages for\nto amplify by using extra computational\nresources to make the system more\npowerful and then you distill by\ntraining the fast system with the output\nof the amplified system and then you\nrepeat so the system will keep on\nimproving so when does this process end\nwell it depends on your implementation\nbut eventually you'll reach a fixed\npoint where the fast system isn't able\nto learn anything more from the\namplified system for simple problems\nthis might happen because the\nunamplified system becomes so good that\nthere's nothing to be gained by the\namplification process if your action\nvalue function always suggests the\noptimal move then the amplified system\nis always just going to agree and no\nmore learning happens for harder\nproblems though it's much more likely\nthat you'll reach the limits of your\naction value function implementation you\nhit a point where a neural network of\nthat size and architecture just isn't\nable to learn how to be better than that\nby being trained on amplified gameplay\nas a human even if you could study go\nfor infinite time eventually you'll hit\nthe limits of what your brain can do the\npoint is that the strength of the end\nresult of this process isn't limited by\nthe strength of the initial action value\nfunction the limit is determined by the\narchitecture it's a fixed point of the\namplification and distillation process a\nversion of alphago that starts out\ntrained on amateur level games might\ntake longer to train to a given level\nthan one that started out trained on\ngrandmaster level games but after enough\ntraining they'd both end up around the\nsame strength and in fact alpha zero\nended up even stronger than alphago even\nthough it started from zero using no\nhuman games at all so that's how you can\nuse amplification and distillation to\nget better at go and why as a software\nsystem you can keep getting better even\nwhen you have no external source to\nlearn from even once you leave humans\nbehind and you're the best go player in\nthe universe so there's nobody who can\nteach you you can still keep learning\nbecause you can learn from the amplified\nversion of yourself ok so why is this\nrelevant fire-safety well we've just\ntalked about one example of iterated\ndistillation and amplification the idea\nis actually much more general than that\nit's not just for playing go and it's\nnot just for Monte Carlo tree search and\nneural networks amplification might be\nthis kind of process of thinking ahead\nif you're a human being it might be\nMonte Carlo tree search or something\nlike it if you're a software system but\nit might be something else if you are\nfor example an age\nI it might involve spinning up lots of\ncopies of yourself to collaborate with\nor delegate to so that the team of\ncopies can be better at solving the\nproblem then you would be on your own\nfor some types of problem it might just\ninvolve running your mind at a faster\nrate to work on the problem for a long\nperiod of subjective time the core\ncharacteristic is that amplification\nuses the original process as a starting\npoint and applies more computational\nresources to create a more powerful\nagent in the same way distillation can\nbe any process whereby we compress this\nmore expensive amplified agent into\nsomething that we can call cheaply just\nas we call the original system for a\nhuman playing go this can be the way\nyour intuition gets better as you play\nfor a neural network playing go we can\ntrain the action value network to give\nthe same outputs as the tree search\nprocess for an AGI it could involve the\nAGI learning in whatever way it learns\nhow to predict and imitate the team of\ncopies of itself or the accelerator\nversion of itself or whatever the\namplified system is the core\ncharacteristic is that the cheaper\nfaster agent learns to approximate the\nbehavior of the more expensive amplified\nagent so these two processes together\ndefine a way of training a stronger\nagent from a weaker one the hope for\nsafety research is that we can find\ndesigns for the amplify and distill\nprocedures which preserve alignment by\nwhich I mean that if the agent we\namplify is aligned with our goals and\nvalues then the amplified agent will be\naligned as well and if the amplified\nagent is aligned then the agent we\ndistill it down to will be aligned as\nwell in the next video we'll talk about\nsome ideas for how this might be done\nI want to end this video with a big\nthank you to all of my wonderful patrons\nthat's all of these fantastic people\nhere who have been just so generous and\nso patient with me thank you all so much\nin this video I'm especially thanking\nSayed Polat who joined in December just\nbefore the start of this gap in uploads\nand the reason for that is I've recently\nreally had to focus on the road to AI\nsafety excellence the online course I've\nbeen working on in fact the video you\njust watched is the first lecture from\nour module on AI da which hasn't been\nreleased yet so I also want to thank\neveryone at the Rays Project for their\nwork on the script and the research for\nthis video and really the whole raised\nteam I'm still making content just for\nthis channel as well and in fact I have\none that's nearly ready to go so look\nout for that thanks again for watching\nand I'll see you soon", "date_published": "2019-03-11T12:14:21Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "9dfd511821a6047598d2632df349b238", "title": "Apply to Study AI Safety Now! #shorts", "url": "https://www.youtube.com/watch?v=twMqHDXO29U", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "after an investigation of gpg4 AI\nresearchers at Microsoft suggested that\nthe language model along with others\nlike charge EBT and Google's Palm could\nbe considered an early form of\nartificial general intelligence with GPT\n4's ability to solve complex tasks\nacross domains such as mathematics\ncoding Vision medicine law and\npsychology it may not be long until we\nget fully fledged AGI and we are not\nready\nin response to the growing need for AI\nsafety sarri mats an independent\nresearch program is training the next\ngeneration of AI safety researchers to\naddress existential threats from\nAdvanced AI at mats you'll dive into the\nfield of AI alignment through scientific\nseminars and workshops get mentored by\nexperts and work amongst a network of\nresearch peers there are multiple\nresearch streams so Scholars can focus\non the alignment research agenda that\nmatches their interests and expertise\napply for the summer 2023 cohort by May\n7th and to stay up to date on upcoming\nAI safety courses and events check out\nAI safety dot training where you can\nsubscribe for regular updates", "date_published": "2023-04-28T16:37:28Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "9823f354062cda6d4e5906e0e3fc1c38", "title": "AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1", "url": "https://www.youtube.com/watch?v=MUVbqQ3STFA", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "dear fellow scholars this is papers in\ntwo minutes with Robert Samuel\nKonigsberg miles okay I'm making this\nvideo for a few reasons firstly I've had\na lot of comments from people saying\nthey'd like me to do videos like Cara\nLee shown over here over at two minute\npapers so this is that if you confused\nlink in the description check out that\nchannel it's amazing secondly this video\nis a follow-up to a video of me that\nrecently went out on computer file about\ngenerative adversarial networks\ndefinitely check that out if you haven't\nyet again link in the description this\nvideo will make a lot more sense if\nyou've seen that one so on the computer\nfile video there were a fairly large\nnumber of comments about there not being\nenough pictures in that video not enough\nsort of demonstrations or visualizations\nof the actual images being produced by\nthese networks and that's largely my\nfault I told Sean I would send him links\nof the papers I was talking about and I\nforgot to do that but we can talk about\nthem here so at the end of that video I\nwas talking about doing arithmetic on\nthe vectors in the latent space if you\ntake your men wearing sunglasses vector\nsubtract the man vector and add the\nwoman vector you get a point in your\nspace and if you run that through the\ngenerator you get a woman wearing\nsunglasses and people were asking if\nthat was a real thing or hypothetical\nand if they could see pictures of and so\non so that came from this paper\nunsupervised representation learning\nwith deep convolutional generative\nadversarial networks by Radford and Metz\nand I was talking specifically about\nfigure seven there's a link to this\npaper in the description as well so you\ncan see here you have a bunch of images\nof men wearing sunglasses and then the\naverage of all of those lake vectors is\nthis image of a man whose glasses then\nwe do the same thing for a man without\nglasses and a woman without glasses and\nthen we can do arithmetic on those input\nvectors and find that man with glasses\n- man without glasses plus woman without\nglasses gives us images of a woman with\nglasses they've also got another one\nhere in this same figure that does the\nsame thing with smiling so you take a\nsmiling woman vector subtract the vector\nfor a woman with a neutral expression\nand then add the vector for a man with a\nneutral expression and you get a smiling\nman which is pretty cool\nso we can see that movements in the\nlatent space have meaning in human\nunderstandable aspects of the image I\nalso mentioned that if you take that\npoint and smoothly move it around the\nlatent space you get a smoothly varying\npicture of a cat now when I said that\nI've never actually seen anyone do it I\njust figured from the mathematics that\nit was possible but just after that\nvideo went live this paper was made\navailable which included as part of\ntheir demo video exactly that smoothly\nmoving around the latent space to\nproduce smoothly varying cat pictures\nand the results are terrifying actually\nI like how the network decided that\nblack bordered white text in the impact\nfont is an important component of a cat\nimage or never happened but the core\npoint of this paper relates to something\nelse I said in the computer file video\nthey're fairly low resolution right now\nprotip whenever you mention some\nlimitation of a I always add right now\nor yet because there's probably someone\nout there at that very moment working on\nsomething that'll prove you wrong anyway\nthis new paper uses a fascinating\ntechnique of growing the neural network\nas it's being trained so new layers of\nneurons are added as the training\nprogresses to allow very large networks\nwithout having to train such a large\nnumber of neurons from the very\nbeginning this allows the system to\ngenerate unprecedented ly high\nresolution images I mean look at these\nresults it's just just beautiful it's\nnice to be able to take a break from\nbeing deeply concerned about the impact\nof a eye on the future of humanity and\njust be deeply concerned about the\noutput of this network what is that what\nis that yeah anyway I'm sure you're now\nwondering assuming I can get this video\nout before everyone's already seen this\nwhat it looks like to smoothly move\naround the latent space for this\ncelebrity faces Network it looks like\nthis\nI'm just gonna let this run I think it's\ncompletely mesmerizing there's a link in\nthe description to the video that I got\nthis from which has a lot more examples\nof the things that they can do with this\ntechnique and it's really really\nexcellent there's also a link to the\npaper you can read that as well\nyou\nI want to thank my generous patrons\nthese people and in this video I'm\nespecially thanking Alexander Hartwig\nNielsen who supported the channel for a\nreally long time thank you so much I\nwant to apologize to two minute papers\nand say thank you for watching and for\nyour generous support and I'll see you\nnext time", "date_published": "2017-10-29T11:49:20Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "ffc3ab0e101e47460740ad5ff851c39c", "title": "Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5", "url": "https://www.youtube.com/watch?v=46nsTFfsBuc", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi in the previous video we introduced\nthe idea of reward hacking an AI system\nthat works by maximizing its reward like\na reinforcement learning agent will go\nfor whatever strategy it expects will\nresult in the highest reward and there's\na tendency for the strategies that have\nthe very highest rewards to be quite\ndifferent from the kinds of strategies\nthe AI designers were planning for for\nexample if you're using the score as the\nreward in Super Mario World the highest\nreward strategy might involve exploiting\na load of glitches to directly update\nthe score value rather than properly\nplaying the game we talked about some\nways that this can happen exploiting\nbugs in the software like in Mario or\nadversarial examples in neural networks\nif you're confused right now check the\nvideo description for the earlier videos\nin this series but in this video we're\ngoing to look at some more ways that\nreward hacking can happen and how they\nrelate to one another so let's start by\ndrawing a diagram I can't believe I\nalready got this foreign to the subject\nwithout drawing this diagram anyway\nhere's your agent here's the environment\nthe agent can take actions to affect the\nenvironment and it can observe the\nenvironment to get information about\nwhat state the environments in there's\nalso a reward system which uses\ninformation from the environment to\ndetermine what reward to give the agent\nso if the agent is pac-man the\nenvironment is the maze and the reward\nsystem is just looking at the score the\nagent takes an action the action affects\nthe environment the change in the\nenvironment creates new observations and\nalso provides a new information to the\nreward system which decides what reward\nto give the agent and the agent uses the\nobservation and the reward to decide\nwhich action to take next and this kind\nof goes in a cycle reward hacking is a\nclass of problems that can happen around\nthat reward system like in the previous\nvideo we were talking about adversarial\nexamples and how they can be an issue\nwhen your reward system relies on a\nneural network but that's not the only\nway this kind of problem can happen the\neconomist Charles Goodhart once said any\nobserved statistical regularity will\ntend to collapse once pressure is placed\nupon it for control purposes but despite\nbeing true that was not very catchy so\nit was changed to when a measure becomes\na target it ceases to be a good measure\nmuch better isn't it that's good hearts\nlaw and it shows up everywhere\nlike if you want to find out how much\nstudents know about a subject you can\nask them questions\nit's about it right you design a test\nand if it's well designed it can be a\ngood measure of the students knowledge\nbut if you then use that measure as a\ntarget by using it to decide which\nstudents get to go to which universities\nor which teachers are considered\nsuccessful then things will change\nstudents will study exam technique\nteachers will teach only what's on the\ntest so student a who has a good broad\nknowledge of the subject might not do as\nwell as student B who studied just\nexactly what's on the test and nothing\nelse so the test isn't such a good way\nto measure the student's real knowledge\nanymore the thing is student B only\ndecided to do that because the test is\nbeing used to decide university places\nyou made your measure into a target and\nnow it's not a good measure anymore the\nproblem is the measure is pretty much\nnever a perfect representation of what\nyou care about and any differences can\ncause problems this happens with people\nit happens with AI systems it even\nhappens with animals and the Institute\nfor marine mammal studies the trainer's\nwanted to keep the pools clear of\ndropped litter\nso they trained the Dolphins to do it\nevery time a dolphin came to a trainer\nwith a piece of litter they would get a\nfish in return so of course the Dolphins\nwould hide pieces of waste paper and\nthen tear off little bits to trade for\nfish tearing the paper up allowed the\nDolphins to get several fish for one\ndropped item this is kind of good hearts\nlure again if you count the number of\npieces of litter removed from the pool\nthat's a good measure for the thing you\ncare about the amount of litter\nremaining in the pool but when you make\nthe measure a target the differences\nbetween the measure and the thing you're\ntrying to change get amplified the fact\nthat there are a lot of pieces of litter\ncoming out of the pool no longer means\nthere's no litter in the pool so that's\ngood hearts law and you can see how that\nkind of situation could result in reward\nhacking your reward system needs to use\nsome kind of measure but that turns the\nmeasure into a target so it will\nprobably stop being a good measure with\ndolphins this can be cute with people it\ncan cause serious problems and with\nadvanced AI systems well let's just try\nto keep that from happening\nanother way that reward hacking can\nhappen comes from partially observed\ngoals in our super mario world or pacman\nexamples the goal is fully observed the\nreward is the score and the AI can just\nread the score out of memory and it\nknows it's reward but if we have an AI\nsystem acting as\nagent in the real world the reward\ndepends on the state of the environment\naround it and the AI only has partial\nknowledge of that through the robots\nlimited senses the goal is only\npartially observed suppose we have a\ncleaning robot with its mop and bucket\nand we wanted to clean the office that\nit's in so we can set it up so that it\ngets more reward the less mess there is\nlike we subtract a bit of reward for\neach bit of mess and the way it\ndetermines the level of mass is to look\naround the room with its cameras what\ndoes this robot do well the answer is\nobvious to anyone who's ever run for\nParliament in Maidenhead the old Skyrim\nshoplifters trick if it covers up its\ncameras say by putting its bucket on its\nhead it won't see any mess so it won't\nlose any reward you've probably heard\nabout experiments on rats where\nscientists implanted electrodes into the\nrats brains allowing them to directly\nstimulate their reward centers and if\nthe rats are able to press a button to\nactivate the electrode they never do\nanything else people call that wire\nheading and it's relevant here because\nif we take our pac-man reinforcement\nlearning diagram and change it to the\ncleaning robot it's not quite right is\nit in pac-man the reward system is just\na little bit of code that runs\nseparately from the game program and\njust reads the score out but for the\ncleaning robot the reward system is a\nreal thing in the real world it's got\ncameras sensors circuitry it physically\nexists as an object in the office\nso maybe the diagram should look more\nlike this because unlike in pac-man now\nthe reward system is part of the\nenvironment which means it can be\naffected by the actions of the agent the\nagent isn't just limited to messing with\nthe environment to affect the\ninformation going into the reward system\nlike putting a bucket on its head it can\nmess with the reward system itself if\nit's able to take the thing apart and\nmake it just returned maximum reward\nregardless of what the environment is\nlike well that's an extremely high\nreward strategy so some AI designs are\nprone to deliberately tampering with\ntheir reward systems to wire head\nthemselves but that's not the worst of\nit there are a lot of AGI design\nproposals out there where the reward is\ndetermined by human smiling or being\nhappy\nor saying certain things hitting a\ncertain button or whatever these designs\neffectively make the human a component\nin the reward system but whatever the\nreward system is the agent is\nincentivized to manipulate or modify it\nto get the highest reward it can with\npowerful general AI systems we don't\njust have to worry about the AI wire\nheading itself I want to end the video\nwith a quick thank you to my excellent\npatreon supporters all of these people\nin this video I especially want to thank\nRobert Sanderson who's supported the\nchannel for a long time you know just\nthe other day my phone completely broke\nand the phone is actually pretty\nimportant because I use it to shoot all\nof the behind-the-scenes stuff anything\nrandom traveling that kind of thing so I\nhad to get a new one and I was able to\nuse patreon money to do that so I just\nwant to say thank you so much for your\nsupport thanks again and I'll see you\nnext time", "date_published": "2017-08-29T10:08:41Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "74cba5c2cf50e9be8ff1e7cb5bb94b6c", "title": "Safe Exploration: Concrete Problems in AI Safety Part 6", "url": "https://www.youtube.com/watch?v=V527HCWfBCU", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this is the latest video in a series\nabout the paper concrete problems in AI\nsafety you don't need to have seen the\nprevious videos for this but I'd\nrecommend checking them out anyway\nthere's a link in the description today\nwe're going to talk about safe\nexploration so in an earlier video we\ntalked about the trade-off between\nexploration and exploitation this is\nkind of an inherent trade-off that all\nagents face which just comes from the\nfact that you're trying to do two jobs\nat the same time one figure out what\nthings give you reward and to do the\nthings that give you a reward like\nimagine you're in a restaurant you've\nbeen to this restaurant before so you've\nalready tried some of the dishes now you\ncan either order something you've\nalready had that you know is quite good\nie you can exploit your current\nknowledge or you can try ordering\nsomething new off the menu that you've\nnever had before\nyou can explore to gain more knowledge\nif you focus too much on exploring then\nyou're spending all of your time trying\nrandom things when actually you may have\nalready found the thing that's best for\nyou but if you don't explore enough then\nyou might end up missing out on\nsomething great finding the right\nbalance is an interesting problem now\nthe most naive form of reinforcement\nlearning is just to always do whichever\naction you expect will give you the most\nreward but agents that work this way end\nup actually not doing very well because\nas soon as they find something that\nworks a bit they just always do that\nforever and never try anything else like\nsomeone who just always orders the same\nthing at the restaurant even though they\nhaven't tried most of the other things\non the menu in the grid world's video I\nexplained that one approach to\nexploration in reinforcement learning is\nto have an exploration rate so the\nsystem will choose an action which it\nthinks will give at the highest reward\nsomething like 99% of the time but a\nrandom 1% of the time it will just pick\nan action completely at random this way\nthe system is generally doing whatever\nwill maximize its reward but it will\nstill try new things from time to time\nthis is a pretty basic approach and I\nthink you can see how that could cause\nsafety problems imagine a self-driving\ncar which 99% of the time does what it\nthinks is the best choice of action and\n1% of the time sets the steering wheel\nor the accelerator or the brake to a\nrandom value just to find out what would\nhappen that system might learn some\ninteresting things about vehicle\nhandling but at what cost\nclearly this is an unsafe approach okay\nso that's a very simple way of doing\nexploration there are other ways of\ndoing it one approach is a sort of\nartificial optimism\nrather than implicitly giving unknown\nactions zero expected reward or whatever\nyour best guess of the expected reward\nof taking a random action would be you\nartificially give them high expected\nreward so that the system is sort of\nirrationally optimistic about unknown\nthings whenever there's anything it\nhasn't tried before it will assume that\nit's good until it's tried it and found\nout that it isn't so you end up with a\nsystem that's like those people who say\noh I'll try anything once that's not\nalways a great approach in real life\nthere are a lot of things that you\nshouldn't try even once and hopefully\nyou can see that that kind of approach\nis unsafe for AI systems as well\nyou can't safely assume that anything\nyou haven't tried must be good now it's\nworth noting that in more complex\nproblems these kinds of exploration\nmethods that involve occasionally doing\nindividual exploratory actions don't\nperform very well in a complex problem\nspace you're pretty unlikely to find new\nand interesting approaches just by\ntaking your current approach and\napplying some random permutation to it\nso one approach that people use is to\nactually modify the goals of the system\ntemporarily to bring the system into new\nareas of the space that it hasn't been\nin before\nimagine that you're learning to play\nchess by playing against the computer\nand you're kind of in a rut with your\nstrategy you're always playing\nsimilar-looking games so you might want\nto say to yourself okay this game rather\nthan my normal strategy I'll just try to\ntake as many of the opponent's pieces as\npossible or this game I'll just try to\nmove my pieces as far across the board\nas possible or I'll just try to capture\nthe Queen at all costs or something like\nthat you temporarily follow some new\npolicy which is not the one you'd\nusually think is best and in doing that\nyou can end up visiting board states\nthat you've never seen before\nand learning new things about the game\nwhich in the long run can make you a\nbetter player temporarily modifying your\ngoals allows you to explore the policy\nspace better than you could by just\nsometimes playing a random move but you\ncan see how implementing this kind of\nthing on a real-world AI system could be\nmuch more dangerous than just having\nyour system sometimes choose random\nactions if you're cleaning robot\noccasionally makes totally random motor\nmovements in an attempt to do\nexploration that's mostly just going to\nmake it less effective it might drop\nthings or fall over and that could be a\nbit dangerous but what if it's sometimes\nexhibited coherent goal-directed\nbehavior towards randomly chosen goals\nwhat if as part of its exploration it\noccasionally picks a new goal at random\nand then puts together intelligent mult\nstep plans to pursue that goal that\ncould be much more dangerous than just\ndoing random things and the problem\ndoesn't come from the fact that the new\ngoals are random just that they're\ndifferent from the original goals\nchoosing non randomly might not be any\nbetter you might imagine an AI system\nwhere some part of the architecture is\nsort of implicitly reasoning something\nlike part of my goal is to avoid\nbreaking this vars but we've never\nactually seen the VARs being broken so\nthe system doesn't have a very good\nunderstanding of how that happens so\nmaybe we should explore by temporarily\nreplacing the goal with one that values\nbreaking versus just so that the system\ncan break a bunch of arses and get a\nsense for how that works temporarily\nreplacing the goal can make for good\nlearning and effective exploration but\nit's not safe so the sorts of simple\nexploration methods that were using with\ncurrent systems can be dangerous when\ndirectly applied to the real world\nnow that vars example was kind of silly\na system that sophisticated after reason\nabout its state of knowledge like that\nprobably wouldn't need an architecture\nthat swaps out its goals to force it to\nexplore it could just pursue exploration\nas an instrumental goal and in fact we'd\nexpect exploration to be a convergent\ninstrumental goal and if you don't know\nwhat that means what's the video and\ninstrumental convergence but basically a\ngeneral intelligence should choose\nexploratory actions just as a normal\npart of pursuing its goals rather than\nhaving exploration hard-coded into the\nsystem's architecture such a system\nshould be able to find ways to learn\nmore about va's without actually\nsmashing any perhaps it could read a\nbook or watch a video and work things\nout from that so I would expect unsafe\nexploration to mostly be a problem with\nrelatively narrow systems operating in\nthe real world\nour current AI systems and their\nimmediate descendants rather than\nsomething we need to worry about a GIS\nand super intelligence is doing given\nthat this is more of a near-term problem\nit's actually relatively well explored\nalready people have spent some time\nthinking about this so what are the\noptions for safe exploration well one\nobvious thing to try is figuring out\nwhat unsafe actions your system might\ntake while exploring and then\nblacklisting those actions so let's say\nyou've got some kind of drug like an AI\ncontrolled quadcopter that's flying\naround and you want it to be able to\nexplore the different ways it could fly\nbut this is unsafe because the system\nmight explore maneuvers like flying\nfull-speed into the ground so what you\ncan do is have the system take\nexploratory actions in whatever way you\nusually do it but if the system enters a\nregion of space that's too close to the\nground\nanother system detects that and\noverrides the learning algorithm flying\nthe quadcopter higher and then handing\ncontrol back to the learning algorithm\nagain kind of like the second set of\ncontrols they use when training humans\nto safely operate vehicles now bear in\nmind that here for simplicity I'm\ntalking about blacklisting unsafe\nregions of the physical space that the\nquadcopter is in but really this\napproach is broader than that\nyou're really blacklisting unsafe\nregions of the configuration space for\nthe agent in its environment it's not\njust about navigating a physical space\nyour system isn't navigating an abstract\nspace of possibilities and you can have\na safety subsystem that takes over if\nthe system tries to enter an unsafe\nregion of that space this can work quite\nwell as long as you know all of the\nunsafe things your system might do and\nhow to avoid them like ok now it's not\ngoing to hit the ground but it could\nstill hit a tree so your system would\nhave to also keep track of where the\ntrees are and have a routine for safely\nmoving out of that area as well but the\nmore complex the problem is the harder\nit is to list out and specify every\npossible unsafe region of the space so\ngiven that it might be extremely hard to\nspecify every region of unsafe behavior\nyou could try the opposite specify a\nregion of safe behavior you could say ok\nthe safe zone is anywhere above this\naltitude the height of the tallest\nobstacles you might hit and below this\naltitude like the altitude of the lowest\naircraft you might hit and within this\nboundary which is like the border of\nsome empty field somewhere anywhere in\nthis space is considered to be safe so\nthe system explores as usual in this\narea and if it ever moves outside the\narea the safety subsystem overrides it\nand takes it back into the safe area\nspecifying a whitelisted area can be\nsafer than specifying blacklisted areas\nbecause you don't need to think of every\npossible bad thing that can happen you\njust need to find a safe region the\nproblem is your ability to check the\nspace and ensure that it's safe is\nlimited again this needn't be a physical\nspace it's a configuration space and as\nthe system becomes more and more\ncomplicated the configuration space\nbecomes much larger so the area that\nyou're able to really know is safe\nbecomes a smaller and smaller proportion\nof the actual available configuration\nspace this means you might be severely\nlimiting what your system can do since\nit can only explore a small corner of\nthe options if you try to make your safe\nregion larger than the area that you're\nable to properly check you risk\nincluding some dangerous configurations\nso your system can then behave\nsafely but if you limit the safe region\nto the size that you're able to actually\nconfirm is safe your systems will be\nmuch less capable since there are\nprobably all kinds of good strategies\nthat it's never going to be able to find\nbecause they happen to lie outside of\nthe space despite being perfectly safe\nthe extreme case of this is where you\nhave an expert demonstration and then\nyou have the system just try to copy\nwhat the expert did as closely as\npossible or perhaps you allow some small\nregion of deviation from the expert\ndemonstration but that system is never\ngoing to do much better than the human\nexpert because it can't try anything too\ndifferent from what humans do in this\ncase you've removed almost all of the\nproblems of safe exploration by removing\nalmost all of the exploration so you can\nsee this is another place where we have\na trade-off between safety and\ncapability all right what other\napproaches are available well human\noversight is one that's often used\nself-driving cars have a human in them\nwho can override the system in principle\nyou can do the same with exploration\nhave the system check with a human\nbefore doing each exploratory action but\nas we talked about in these scalable\nsupervision videos this doesn't scale\nvery well the system might need to make\nmillions of exploratory actions and it's\nnot practical to have a human check all\nof those or it might be a high speed\nsystem that needs inhumanly fast\noversight if you need to make decisions\nabout exploration in a split second a\nhuman will be too slow to provide that\nsupervision so there's a synergy there\nif we can improve the scalability of\nhuman supervision that could help with\nsafe exploration as well and the last\napproach I'm going to talk about is\nsimulation this is a very popular\napproach and it works quite well if you\ndo your exploration in a simulation then\neven if it goes horribly wrong it's not\na problem you can crash your simulated\nquadcopter right into your own simulated\nface and it's no big deal the problems\nwith simulation probably deserve a whole\nvideo to themselves but basically\nthere's always a simulation gap it's\nextremely difficult to get simulations\nthat accurately represent the problem\ndomain and the more complex the problem\nis the harder this becomes so learning\nin a simulation can limit the\ncapabilities of your AI system for\nexample when researchers were trying to\nsee if an evolutionary algorithm could\ninvent an electronic oscillator a\ncircuit that would generate a signal\nthat repeats at a particular frequency\ntheir system developed a very weird\nlooking thing that clearly was not an\noscillator circuit but which somehow\nmysteriously produced a good oscillating\noutput anyway now you would think it was\na bug in the simulation but they weren't\nusing Simula\nthe circuits physically existed this\ncircuit produced exactly the output\nthey'd asked for but they had no idea\nhow it did it eventually they figured\nout that it was actually a radio it was\npicking up the very faint radio signals\nput out by the electronics of a nearby\ncomputer and using that to generate the\ncorrect signal the point is this is a\ncool unexpected solution to the problem\nwhich would almost certainly not have\nbeen found in a simulation I mean would\nyou think to include ambient radio noise\nin your oscillator circuit simulation by\ndoing its learning in a simulator a\nsystem is only able to use the aspects\nof the world that we think are important\nenough to include in the simulation\nwhich limits its ability to come up with\nthings that we wouldn't have thought of\nand that's a big part of why we want\nsuch systems in the first place\nand this goes the other way as well of\ncourse it's not just that things in\nreality may be missing from your\nsimulation but your simulation will\nprobably have some things that reality\ndoesn't ie bugs the thing that makes\nthis worse is that if you have a smart\nAI system it's likely to end up actually\nseeking out the inaccuracies in your\nsimulation because the best solutions\nare likely to involve exploiting those\nbugs like if your physics simulation has\nany bugs in it there's a good chance\nthose bugs can be exploited to violate\nconservation of momentum or to get free\nenergy or whatever so it's not just that\nthe simulation may not be accurate to\nreality it's that most of the best\nsolutions will lie in the parts of the\nconfiguration space where the simulation\nis the least accurate to reality the\ngeneral tendency for optimization to\nfind the edges of systems to find their\nlimits\nmeans that it's hard to be confident\nthat actions which seem safe in a\nsimulation will actually be safe in\nreality at the end of the day\nexploration is inherently risky because\nalmost by definition it involves trying\nthings without knowing exactly how it'll\nturn out but there are ways of managing\nand minimizing that risk and we need to\nfind them so that our AI systems can\nexplore safely\n[Music]\nI want to end this video by saying thank\nyou so much to my amazing patrons it's\nall all of these people here and in this\nvideo I especially want to thank Scott\nWorley thank you all so much for\nsticking with me through this giant gap\nin uploads when I do upload videos to\nthis channel or the second channel\npatrons get to see them a few days\nbefore everyone else and I'm also\nposting the videos I make for the online\nAI safety course that I'm helping to\ndevelop an occasional behind-the-scenes\nvideos - like right now I'm putting\ntogether a video about my visit to the\nelectromagnetic field festival this year\nwhere I gave a talk and actually met\nsome of you in person which was fun\nanyway thank you again for your support\nand thank you all for watching I'll see\nyou soon\n[Music]", "date_published": "2018-09-21T11:20:53Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "b6d31bd6c0b571ec1e6cc2c24a40bea2", "title": "Free ML Bootcamp for Alignment #shorts", "url": "https://www.youtube.com/watch?v=4x3q1RbRphk", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hey you know how nobody really\nunderstands how to ensure the ai systems\nwe build are safe and aligned with human\nvalues\nare you someone who maybe would like to\ntry to work on that problem but you\ndon't know enough about machine learning\nwell an ai safety organization called\nredwood research is running a machine\nlearning boot camp specifically for\npeople interested in ai alignment it's\ncalled mlab and it's an all-expenses\npaid in-person boot camp in berkeley\ncalifornia between august the 15th and\nseptember the second they're looking for\npeople to participate and also for\npotential teaching assistants and\nthey're open to students or people who\nare already working i might actually be\nthere myself if the timing works out\nand last time they ran this boot camp\nredwood research ended up hiring several\nof the participants so it might actually\nbe a way into a career in ai safety\nif you're interested look up redwood\nmlab 2 and apply now because the\ndeadline is this friday may 27th", "date_published": "2022-05-24T17:30:22Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "8abda5d05e079c43ce32d591ef523f94", "title": "Channel Introduction", "url": "https://www.youtube.com/watch?v=vuYtSDMBLtQ", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi my name is Rob Myles welcome to my\nchannel\nit seems pretty likely that sooner or\nlater will develop general artificial\nintelligence that is to say a software\nsystem that's able to reason about the\nworld in general and take actions in the\nworld at large we don't know how to do\nthat yet\nbut before we figure it out it would be\ngood to know for sure that any such\nsystem we create would be safe would be\npositive would be beneficial to humanity\nit's not as easy as it sounds on this\nchannel we'll talk about machine\nlearning and artificial intelligence\nwe'll look at some of the problems of AI\nsafety and some of the work being done\nright now on those problems if that kind\nof thing sounds interesting to you hit\nthe subscribe button you may also want\nto hit the little Bell if you want to be\nnotified when new videos come out and\nmore importantly if you know anyone who\nyou think might be interested in this\nkind of thing send them a link yeah\nthat's it for now\nnew videos coming soon watch this space\nyeah perfect perfect yeah that's the\ntake use that one", "date_published": "2017-02-28T20:14:23Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "12e434cae2ee2be05989ad476b7dea40", "title": "Apply Now for a Paid Residency on Interpretability #short", "url": "https://www.youtube.com/watch?v=R8HxF8Yi6nU", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "modern language models can write poems\nexplain jokes and convince people that\nthey're sentient but we have almost no\nidea how they work like what are they\ndoing internally\nif that bothers you you might want to\napply to remix a neural network\ninterpretability residency run by\nRedwood research if accepted you'll\nbuild on recent progress in\ninterpretability research to reverse\nengineer the mechanisms that models use\nto generate language\nthis is a small and new field so it's\nvery possible that you could uncover\nsomething important and surprising the\npaid research program takes place in\nBerkeley California in December January\ndepending on your availability if you're\ninterested look up Redwood research\nremix and apply right now because the\ndeadline is November 13th", "date_published": "2022-11-11T18:07:58Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "e0223eba2c3b003284c3b33018cc95cb", "title": "Superintelligence Mod for Civilization V", "url": "https://www.youtube.com/watch?v=_UzX3L7lXhw", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi everyone so today we're trying\nsomething a little bit different because\nI read that there's recently been\nreleased a mod for the civilization\ngames which includes AI as a as a\navailable technology so they've changed\nthe game so that you can actually\nproduce super in charge in AI looks\nreally interesting and this is actually\ncome from the Center for the Study of\nexistential risk at the University of\nCambridge the The Verge has a nice\ninterview I'll put a link to this in the\ndescription a nice interview with dr.\nShah Ravine who is the researcher who is\nmanaging the project and yeah so I\nthought what we do for this video is\nhave a kind of a let's play I'm new to\nmaking Let's Plays and I'm actually kind\nof new to playing civilization as well I\ndidn't play it as a kid so I've got a\nfriend to come and help me with\ninstalling it and configure it and\neverything like that oh it's it's dr.\nShah ravine from the Center for the\nStudy of existential risk okay let me\nhelp sir\noh those those silly but I like it okay\nso I've got I've got steam here I've\ninstalled say five which version of safe\nover there like regular one just the\nbase game Oh vanilla okay the one so how\ndo I install this mod so you need to go\ninto the Steam Workshop I got this\nbrowse the wall krumping yep and then I\nmean what kind of lucky that will quite\npopular at the moment and so you can\nclick on superintelligence\nthat's us yeah that's the Caesar logo\nthis version is for people who have the\nbrave new world expansion okay yeah but\nthat's not what you have so you can\nscroll down and there is a link this\neveryone it already get to the vanilla\nversion with the basic game okay cool\nand then you just tip subscribe just\nokay so let's let's play okay so um\nmy understanding of this game is you\nstart from like the Stone Age or\nsomething right you start like\nprehistory beginning of humanity type\nthing and you worked your way up so this\nmod presumably only actually has an\neffect towards the end of the game right\nyes and we even say in the mode\ndescription that we recommend if you're\njust trying to take a kind of taste of\nthe mod then you should start in kind of\nthe latest age just before the thing\nkicks in which for vanilla would be in\nthe modern era so we can show how to do\nthat yeah so me to go into mods no once\ninstalled up and this there it is so\nit's in this one well you just need to\nclick the V in a bowl okay okay that's\nit thank you I want to be with you on I\nthink we should be England we should be\nthe UK who's that you just passed us\nElizabeth did I oh yeah right cool\nthat's us here we are game era you want\nto change that to modern money so future\nyou would have a look at this Cup a\nlittle thingy I gotcha\nbut starting in modern is the way to do\nthis you uh-huh I'm really nervous now\nbecause usually people when they watch a\nlet's play they want to watch someone\nwho's like really good at the game win\nthe game and I don't know what I'm doing\nat all and I think it's gonna be\nterrible and I hope people are okay with\nwatching me fail to save the world from\nrogue super intelligence because that's\nprobably what's gonna happen right the\nstone is also good for production but I\nthink at this point not as important\nonly one military you\nbloody hell yeah I figured that stuff\nout thank you okay I actually have a guy\nhere let's see okay okay\ngo over and say hi it's al Almaty they\ngive us 30 gold thanks man\nfeel bad they gave me 30 gold I can't\neven I don't even know how to pronounce\ntheir name of the city so these are the\nthings that we can currently research\nyep and if you look a little bit into\nthe future I can see all of this stuff\nwe edit so is this this is a new one yep\nokay so I'm of course\nRicky is fine with it ah\nthey are folk analogy thisis nice which\nwas my last video so that's good it\nallows you to build the AI safety lab to\nreduce the chances of rogue super\nintelligence eliminating the human race\nwouldn't that be nice yeah that sounds\ngreat\nokay so the settlers are gonna establish\nthe city here hmm it's Nottingham hey oh\nI see we've built Nottingham and now\nwe're unhappy well social policies gave\nus more out of more happiness from\nuniversities than we would otherwise get\nyeah that's all of the embodied Hill so\ndoesn't usually give you a first one\nhappiness really maybe they should know\nyou'll knowledge me I have to do it I\nhave to build nothing University and I\ncan build a computer file and then my\nown channel and it's gonna be great\nit's Dublin hey Dublin are militaristic\nbut friendly I don't have any comment\nabout that\nit's tyre they're militaristic and\nirrational this city state as well its\nBelgrade thank God\noh well uh okay I got the other one\nwrong as well again militaristic but\nfriendly everyone is so militaristic Oh\nAlexandra see me Dante oh hi Riley\nterrible I don't know I uh I've heard\nhim called Alexander the Great\nif it's the same guy I think it is the\nsame guy maybe this is his cousin\nokay so horses I guess we don't have\nhorses or any horses nearby any of our\ncities right now different though\nadmittedly we all also in the modern era\nsay we don't really care about horses\nthey have become obsolete that's like\nyou guys will soon become oh yeah\nI keep forgetting that so what is this\nguy doing I'm cosplaying what when I was\nlittle are you uh I mean right back at\nyou\nMontezuma he's got the crazy eyes also\nthe crazy held yeah another herb just\nhad a deal crazy hat I mean I like his\nhat I have no criticism whatsoever about\nthe Hat Montezuma the terrible everybody\nis terrible I mean you look at this dick\nhole the thing I like is that he's\nfriendly I would hate to see him when\nhe's pissed up I'm gonna move from it\nlike around here so I can just see more\nI've discovered the Grand Mesa yay and\nit's now we're actually happy\neverybody's pleased about this a big\nrock may be good\nwait hang on let me have a look at it\nwell it's a pretty big rock say oh it's\noh yeah it's taking a while to load\nno don't Washington okay it's his\nearnest hope that our two people can\nlive side by side in peace and\nprosperity that sounds good\nI just hope yeah no sugar for cotton one\nfor one happy with that yeah there's a\nsugar is a luxury happiness is good we\nneed that too\nArtie we are the British perfect yeah\nthere it should make tea as a that's\nprobably mod that does that tea you\ncan't grow tea no this game is just de\nfraud so in order to be successful in AI\nin the late game mm-hmm what are good\nthings for a civilization to have so\nfirst of all you want to make sure that\nthere is enough safety a safety\ndiscipline being done it requires a\nsafety Labs when you can only have one\npill city right so having the feel\nnumber of cities is not bad\nokay having a bunch of alliances with\ncity-states gives you access to the AI\nresearch which is not bad okay yep that\nmakes sense so like if there are safety\nresearchers in Dublin and we're friends\nwith them we can use their safety\nresearch yeah and there and they\npreviously searched as well\nokay of course you want to have a good\nscience base we discover all the\ntechnologies that speed up your research\nmm-hmm so you know how citizens get made\nhow the citizens get made you know we\ncan I get it get into it oh well yes but\nreally what happens is you have lots of\napples and then your little circle gets\nfull of apples and then those apples\nturn into a citizen ah and then your\ncircle gets emptied of all the apples\nand you need to start over again\nyeah that's that's pretty much how I\nremember it from biology hi Ischia are\nyou aware that your city is on fire\nmaybe you should attend to that first or\nis this somebody else's city I think I\nmean they do get triple gold from\npillaging cities right let's focus on\nwhat's the city okay yeah you don't\nshrink for more unnecessary or fun or\nexpedient okay\nagain friendly not really seeing it\noh and he was fugly he was friendly I\nhad a I you know I said I had a bad\nfeeling about him must be of this cult\nall right so I get to choose between\nsaying you'll pay for this in time well\nvery well which do you think is more\nfoolish I mean the second one but like\ndoes this have any game impact no pretty\nokay well that's that's nice\ndefinitely need to make sure that we are\nmaking some units yeah you see that's\nthe thing you have limited amount of\nresources yeah\nMontezuma shows up and you can't spend\nit on me\nit's sort of symbolic the way that the\nbiggest the game that deep mines dqn\nstuff has the biggest challenge playing\nis Montezuma's Revenge I haven't thought\na bit but it's and now Montezuma\nscrewing with Rai stuff may be somewhat\nbeautiful damn you Montezuma we will\nhave our revenge\nslightly excessive pulse\nI don't like how militarized this whole\narea has become has become quick Meadows\nwhere you can kept other local oh they\nmove closer that was really nice of them\nidiots wonder if anybody comes we're\ngoing to declare war on us I hope not\noh these guys oh yeah taking everything\non in fact they can kill off those\nthat's hilarious\nyep because railways are so cheap to\nmove on yeah it's true but they\nrepresent it by running extremely fast I\nguess we need soldiers yeah you can be\nbombers yeah\nI'm gonna screw up on Tajima with\nbombers Oh has it really\noh wow long as we give him all of our\nstuff that's Hillary I love our stuff\nthat's really an amusing offer it is\nI think that's are you tempted uh you\nknow sorry I just look in his face with\nwhat I think is not making eye contact\nbecause he looks super distracted what\nmm-hmm\nwhat's over there what's over there I\nthink it just looks shifty yeah I think\nthat's like I am bluffing I don't know\nif you know I am bluffing Baba I like\nbombers hang on are we already building\na bomber in Nottingham yeah but you can\ngo up to as much oil as you have and\nwe've got plenty of oil yep\nyeah I like bombers - they're cool but\nI'm really annoyed that we have to do\nall of this crap like military stuff\ninstead of it just feels wasteful you\nknow livings just burning a bunch of\nresources on each other\nI mean listen that's what we do yeah\nwhat maybe he should sue for peace in a\nway that isn't just give us everything\nyou own really really that's mean and\nit's just something necessary why oh\nokay but that's that's not good what did\nI do\nokay oh that's not aggression leaves us\nno choice\nwhat are you talking about 1:10\naggression what's he talking about I\ndon't know I think he just wants to do\nsome done Cup and he's making up excuses\nnone of them like ends whatever do\ndoesn't my life no that's not a thing\nthat happens but this is you know this\nis Washington the terrible he's become\nfour terrible he has become though he\nwas a doji before yeah they aren't so\nwarmongering what in the hell is going\non I mean to be fair when we went around\neveryone was militaristic and that\nshould have been a clue I don't like\nthis okay very well okay well he has a\nlot of troops around all cities yeah\nwe're really in trouble yeah yeah we are\nlet me just is anybody not at war with\nus ask you I don't even like them I mean\noh we screwed at this point\nI mean we are not gonna get to suit to a\nis not anytime soon but no one else is\neither so this can just play out and\nthen we get back on that back on track\nyeah okay all right wartime I mean it is\n1943\nyou kind of expect there to be a world\nwar going on but it's not a world war at\neveryone at war with us that doesn't\ncount as a world war is there a way to\nknow if any of these guys are at war\nwith each other and I think we would\nhave known I think they are not lunders\nmade Obama\ngood job London does that mean we have\ntwo bombers in under now yes nice buy\nalthough production so so I don't know\nhow did we find that out oh into the\ncity huh because they've taken all of\nthe people and stuck them in the\nuniversity and in the public school yes\nif you take them out of the university\nin the public school and back into the\nmines\nI mean it's war dig for victory now to\nsee units here and do ships which\nprobably means we can just get rid of\nboth of them meat this is a slightly\nconfusing game to play sometimes yeah\nokay okay hang on there are no more\noffensive units no Canton boy yeah there\nwas well done that's hilarious man naval\nsuperiority is kind of a laugh yeah the\ninitial four or Greek troops just didn't\ndo anything\nyeah luckily for us it's embarrassing to\nbe honest yeah please\nget rekt I like that it's like a\nLancaster bomber\nyeah I gotta say I remember the second\nworld war going quite differently I\ndon't think we were at war with the u.s.\nOh what with the u.s. kind of on the\noutskirts of Nottingham not far from\nDublin right oh good nice okay all right\nokay I'm feeling this worried about this\nwoman yeah I mean we are still at war\nwith literally everyone we've ever met\nexcept somehow except Songhai for some\ncompletely unknown reason the like by\nfar most warlike looking of all of them\nmaybe except one does it yeah with the\nskulls hidden he maybe should realize\nhe's the baddies it's like literally\ncovered in skulls and from that wall of\nskulls and it's like so be sure that\nwe're on the side of the good here right\nwhat as though those soldiers outside of\ntoxic land and soil found them what what\nis the Greeks doing all the way over\nthere all right fine yeah\nartillery get him\ngood it's always funny when I tell her\nhe just completely destroys one guy I'm\nsorry\nrandom American soldiers it's not gonna\nbe your day they should not have the\nCold War it did seem unnecessary but we\ndo now have a stupid amount of like a\npower so yeah maybe people just cutoff\nladles hmm that guy's scary let's piss\nhim off okay go over there and finish it\nyou're cowards and if it'll make you\nfeel better a great general is gonna\nhelp you out oh my god he gets a little\nJeep yep\nthat's adorable and a flag and a flag\nhe's driving a Jeep on the railway\ntracks yep he doesn't care he's a great\ngeneral don't you gonna listen yeah\n[Music]\nyep please continue getting read\noh yeah it's not off those bonds didn't\neven explode I think they literally just\nphysically hit them with the harmless\nhe's not even it's thinking about yeah\nnice this is still so wasteful though\nit's completely consumed all of our\nresources for like many turns true\neverything's not even 45 it's time to go\noutside\nso I can to make peace oh yeah do you\nthink they'd do that deliberately No\nOh what there's my whatsapp is pretty\ninteresting\nI accept that deal that seems reasonable\nvery reasonable\nyou're like randomly yeah back to being\nfriendly what in the hell is up with\nMontezuma are you kidding me what\nweakness I have like seven bombers you\nhave a sword what are you doing you\nthink we should not have trusted the guy\nwith the burning citizen like oh maybe\nlisten Askia if I'm honest I've never\nheard of Songhai I know this is\ninsulting\nwhat is that what is where in the world\nwho was Songhai I I feel this I mean I'm\nrevealing weaknesses in my history\neducation what is where is do you know\nthat no no well ok we'll deal with this\nguy we're very very well well jolly good\nshow more Wars that's fine\nyou know we don't mind a bit of a war\nand now Florence is declare war on us\ndiscover computers finally a great\nscientist has been born in the city of\nNottingham how about that what is the\nname of the scientist in Nottingham\nwhere's my scientist show me my\nscientist Pharisee is behind those\nscientists often out hiding behind your\nHilary\nthere is no doesn't it well the general\nhere well it's like of course oK we've\ngot five minutes he can rush the\ntechnology\ncome let's look at the tech tree so we\ngot computers no now we can rush I yep\nthat's good discover technology yeah and\nthen you have that golden science file\nokay right right right right and I can\njust oh wow I could get any of these\nthings any of these things but I think I\nis the most powerful\nyeah and it's do it alright we're in it\ndoesn't win the game the question of\nwhether machines can think is about as\nrelevant as the question of whether\nsubmarines can swim okay lets us build\nan AI lab good so maybe I should explain\nthe technology and we'll give\nintelligence it's like well what's the\npoint you'll have discovered the fish\nintelligence so that he said it's the\nfield of artificial intelligence so it's\nsomething like the tearing paper in mind\nor the ran of Dartmouth summer school\nit's like hey we have computers now\nmaybe we can use them to solve this\nthing maybe they can think yeah like a\nsubmarine yeah so we've got AI if we can\nalso get robotics and then we can get\nthe orthogonality thesis yeah which is\nnecessary if we want to build the AI\nsafety lab because right now we can\nbuild the AI lab but that's just gonna\ncause a bad outcome how many safety\nresearch having discovered artificial\nintelligence your researchers may now\nstart working towards super intelligence\nmanage your research through the\nartificial intelligence screen you guys\nmade a whole screen yeah Wow you can\nclick the thing and it will bring up the\nscreen yeah okay so a I research level\nis zero because we haven't even finished\nour first AI lab mmm from local research\nalso zero from open research by others\nalso zero and I don't know who's gonna\nhelp us because America is at war with\nus the only people not at war with us\nare the Aztecs\nyeah with even people who are at war\nwith you if they publish uh-huh you can\npick up those so a treaty only I would\nshare all research and guarantee that\nvalues of both civilizations are built\ninto the AI being developed I haven't\nsigned any treaties yeah cuz everyone\nbut want to zoom is it war with me I\nthink you said in the in the interview\none thing you've discovered is that if\neveryone's at war ai cooperation becomes\ndiff\nyeah maybe we should just find out that\nyou can get to that screen both on that\nAI count though that has now appeared on\nyour top oh yeah so you can kind of\nquickly see that and it's also if you\nlook at the kind of menus so the little\nscroll icon yep yeah and then AI is ah\ncool okay if you click on the thing\nbefore then it just tells says AI has\nnot been developed yet okay so it's time\nto build some AI labs and some AI safety\nlabs well I mean it's time to maybe not\nbe at war with literally everyone I\ndon't know man you've done nothing to us\nof consequence apart the worst thing you\ndid was when that destroyer destroyed my\ninfantry that was annoying Montezuma\nnothing else you've done do you remember\nwhen I steamrolled at three of your\nunits on the ocean and never mind even\nhis horse looks at me it's not the time\nfor negotiation alright alright I guess\nwe're gonna have to kick some ass you\nwill never get horses it's never gonna\nhappen\nwell I sued for peace man he didn't want\nit it is true so London is not making\nthat London's gonna make an AI lab do\nyou think it's realistic for those of\nyou never been London well I mean for\nrealism Nottingham should have the first\none but then after that I think that\ncould be won in London so this is an\nidiot but okay I mean I admire your guts\njust spraying all over the jungle oh wow\nwhat he's the father war ah I'm inclined\nto accept I see no reason I've always\nwanted horses now we have all of them\nlike Alexander he's bound to lose now oh\nyeah yeah for sure\nyeah oh he's a dodge again yeah I'm\ntechnically I'm still at war with Greece\nhe's just rubbish at it yeah why do\npeople keep declaring war on me when it\nworks out terribly for them every time\nthey'll fight you'll win well I should\njust deal with it\nah another ball\nwhat is our economic adviser say a\nwindmill really oh we could build\ngreen's windmill it's it's in Nottingham\nthere's a windmill mm-hm and it's the\nscience windmill it's gonna give us\nextra science I mean it will speed up\nbuilding science buildings so you got\nthat good building greens windmill okay\nsince we're very pleased with that\nI am I used to live right near it on\nschool it's like a little science museum\noh I want these guys to build well they\nknow well so that we can have more\nbattleships no wait\nwe are still we're technically still at\nwar with Greece yeah I feel like\nAlexander would be like deeply offended\nthen I keep forgetting we're at war with\nhim just like oh yeah we are at war with\nyou it's just like it not doing anything\nthreatening Nottingham with a windmill\nit built the windmill yes we have greens\nwindmill good now all of our science is\ngonna be extra sciency yes huh\n[Music]\nsure yeah big idiot ah okay the world is\naround you finally becoming year's\nslightly less polluted yeah yeah bombing\nthis out of people until they stop being\nso damn belligerent okay it's it's Louie\nde Guerre\nit is inventor of the photography wasn't\nhe well clearly he should be rushing\ndeep blue when the time comes do de\nGuerre\nFrench artist and photographer recognize\ntaste invention at the daguerreotype\nprocess of photography neat okay he's a\nphotographer he's in Nottingham does it\nmatter where deep blue is built and well\nthen he deep blue is built in Nottingham\nyeah yeah rewriting history sleeping bed\nuntil you discover day for money okay\nalright we're researching the orthogonal\nT thesis you know we put all of this\nextra information in the civil of media\nso if you don't know what the [ __ ] man\nthis is either way if anybody didn't\nknow what what the Greeks they came up\ndammit\nyeah so if you didn't watch my most\nrecent video about the ocean canal t\nthesis uh-huh you could watch that or we\ncan look in the hope they go the dicks\nokay stuff yep\nmodern era Syria so this is a technology\nan area of research that lets you build\nthe AI safety lab to reduce the chances\nof rogue superintelligence eliminating\nthe human race in Cleveland has this\nthat's pretty great if statement of the\nthorn ivy this is from Brian\nintelligence and final goals are\northogonal axes along which possible\nagents can freely marry in other words\nmore or less any level of intelligence\ncould in principle be combined with more\nor less any final goal okay\nhow's that for a hidden message in a mod\nit's very subtle yeah\nif you think this is important I mean\ncheck out this guy's channel\nwe may watching a video on my channel so\nthat is true so we also have\ndescriptions for artificial intelligence\nyeah this is really nicely done\na couple of mr. Shafi actually hmm yeah\nhe deserves credit though you need data\nmining to build deep blue mm-hmm so\nthat's good let's get some of that how\nare we doing on our tech tree we've just\nstarted on the other naughty thesis yeah\nso does that mean I need to\ncollaboration yeah no all the way back\nto the that's so I actually need ecology\nyeah but before that you need penicillin\nin the plastics oh no yes so all of that\nah I thought I was so close to like\ncracking the code not quite doing it but\nit turns out well you see so in fact\nafter AI if you want you can just drop\noff agonizing theses and go down the\nother word to get capability research if\nyou just trust someone else to do safety\nfor you so you went kind of from AI and\ndown to whole buttocks to automatically\nsees mm-hmm but after a yeah you could\nhave just gone back kind of to\npenicillin plastics ecology\nglobalization data mining mmm so you\ncould do just AI capabilities with never\nbothering safety right because you just\ntrust that other people will handle it\nfor you yep mmm I don't trust anyone\nelse because literally everyone\nliterally every other sieve has at some\ntime or another declare war on me for no\nreason\nyeah this is not a publicly owned\nplastic world yeah so I feel as though I\nfind gonna do AI right I got to do it\nmyself\nit's gonna be a made in Britain AI oh oh\noh oh so those things maybe they'll be\nfriends with us maybe maybe they'll\ndeclare war on us for no goddamn reason\nI want to build in fact I'm gonna do it\njust because I like the idea of the\nStatue of Liberty being in York so I'm\nactually a peace with these guys no\nthat's a novel\nnow I'm actually a piece now with more\npeople than I'm at war with yeah so feel\ngood about that where is Russia anyway I\ndon't know we haven't found them\nwe freakin me I feel somewhat stealth\nRussia who gets over here nobody knows\nfor sure okay\nI mean somebody big\nwhat is Russia we just don't know okay\nokay so Hastings needs to decide what to\ndo yeah I think they should have an air\nlab as well of course everyone should\nhave an atom yeah one for you and one\nfor you everybody gets an AI lab now\nI'll keep him guarding because okay the\nRussians could come from anywhere\nbecause they don't know where they are\nso so just head up head off and head off\nthat way okay I think we found the\nRussians ah there it is there's the\nborder yeah okay well that's one way to\nfind whoa ah\nthey were coordinating with each other\nRussia and the Greeks decided to do\ntheir big attack all at once huh that's\ncute\nI'm really tempted to do is take these\nhorse people though so as I could do it\nin one turn with these you know what I'm\ngonna cuz that's gonna really really\nannoy him we're ten percent of the way\ntowards AGI so the risk going so we're\ndoing 84 of it we're doing actually all\nof the AI research aren't we no it's\njust of our research it's all I'm\ngetting from any gotcha cuz everyone's\nat war with us danger of rogue super\nintelligence is 98 though yeah\nso there's somebody else out there is\ndoing AI research yes and it's weird\nthat we know that because we don't know\nthat right yeah that's true but you kind\nof it's really bad flinging bad\nconditions on players without letting\nthem know it's happening yeah so I mean\nin reality we have no idea how close\nsuperintelligence also super intelligent\nleast correct having a clear number that\nyou're aiming at is yeah nothing like\nreality but it makes a lot of sense from\na game mechanic design yes and kind of\nin general in the mode you kind of have\nto go I will realistic this is important\nto us to capture light or this is gone\nif we do this way it's not gonna be fun\nto play right yeah that's the same thing\nI think people people they ask like okay\nso when is it gonna happen how long is\nit gonna take\nwhere are we right everybody wants to\nbelieve that we actually know how far\naway we are\nit's unknown unknowns right yeah if we\nknew exactly what the problems were that\nwe needed to solve and how long it would\ntake to solve them we would already be\nmost of the way to solving them that is\ntrue mmm-hmm I mean looking at record of\nhumanity and we are not very good at\nforecasting technology progress that's\narticularly for things that are brand\nnew my yeah look at development of\nnuclear weapons you had some people kind\nof actively working on it other people\nsaying it's never gonna happen\nyeah applications of electricity I mean\ngo back as far as he wants to really\ntransformative technologies if you have\na rough idea of how it's going to walk\nthen you have some timeline in your head\nbut a lot of it depends on things that\nyou don't know until they're gonna try\nthem and if you don't have a timeline in\nyour head then you just have some have\nsome vague arguments about what about\nwhy it's never gonna happen\nyeah it's often the distribution of\npredictions that you have but here in\nthe land of fiction and games we know\nthat we're ten percent of the way there\nyep but whether the risk is outpacing\nour own progress right which it would\nmake sense if a lot of people are\nworking on it so to clarify the the\nrules here if this hits 800 before this\nhit hits 800 everyone dies we're screwed\nokay if we're trying to get AI right and\nspecifically I safety what are the\nthings we're gonna need so you gonna\nneed their safety labs right if you can\nbuild one to discover Logan Rd thesis\nokay I think it's not very far in the\nfuture so what I'm gonna want once I\nactually have AI labs I want to have AI\nsafety labs is research capability right\nso is there anything I can build now\nthat will increase my research case\nwe've maxed out on currently available\ntech you have the University in the\npublic school yep and you have any scope\nof the finger lets you build a social\nobject right but you will also want to\nhave lots of population so you can have\nexcess population to put as specialists\nso I want happy so that's okay happiness\nand food right so aqueduct and stadium\nor both relevant to that yeah and you\nwould want to have money so you could\nput it into treaties and the safety fund\ngotcha\nall right well I'll go with the\naqueducts then because it's super quick\nsure nice hey we've researched the\northogonality thesis the greatest task\nbefore civilization at present is to\nmake machines what they ought to be the\nslaves instead of the masters of men Wow\nwhen was that written sugar look it up\nhmm oh so much sure we want slaves but\nwe definitely want done for them to be\nmasters like yeah I was thinking that\nlike slaves feels very anthropomorphic i\nI envision well-aligned AGI as just\nwanting the things that we want so that\nwe don't need to enslave it or control\nit it's free to do what it wants and\nwhat it wants to do is good things I\ncan't really like their minds in the\ncultural novels oh yeah as some kind of\nif we get it right this is what it might\nlook like we have this kind of other red\nwants what we want so these are novels\nby EMM X writing in banks they're\nofficially recommended novels there you\ngo yeah yeah I agree though the culture\nseems they have a good it's not perfect\nbut it's sort of a plausible good\noutcome yes I mean as with anything this\nis beyond the pale or a transformative\ntechnology we don't know what the\noutcome is gonna look like but it's nice\nto have some positive vision that don't\ninvolve slaves or mussels right I agree\nso we can build a are safe collapse this\nis exciting times\nwe're now at 102 ai and 118 which\nprobably means it's not\nin a lab in a city-state but just minor\naccidents so we have made it so that\nwhenever there is a research right you\noccasionally get some chance of extra\nrisk being generated by any of the labs\nokay\nit's kind of someone forgot to on the\nset of tests before committing the code\nlike what would be an example of the\nkind of thing you're thinking of so say\nthere is a just a software bug in a\ncommon AI framework okay right yep\nit goes in there it goes undetected\nmany years down the line either a human\nadversary decides to exploit that\nvulnerability or a system that's under\nan optimization pressure finds a way of\nexploiting the vulnerability mm-hmm so\nit's just a little bit more risk within\nyour system because you haven't designed\nanything in advance to be as safe and\nsecure as possible right so things like\nkind of terminating strings with nulls\nat the end other than having the length\nof the beginning opens up the whole\ncategory of of a flows right eating\nitself is not doesn't cause any harm but\nit just increases the risk of something\nbad happening followed down the line so\nin the real world then we probably have\na huge amount of that because most\npeople writing most software including\ni/o software are not are not making an\nextraordinary effort to make sure that\ntheir code is very secure and robust yes\nthere is a paper Wu's name of which are\nnot currently remember I think it had\ndemons in it this one I'm just making\nediting work for myself now I'm not\ngonna do that okay yeah no but it kind\nof they serve a security vulnerabilities\nin common\nai frameworks and they find the whole\nbunch of them interesting okay that's\ncool I will link to that paper if I\nremember to I guess just more\nbombardment mm-hmm that's fine\nand I'm gonna use it to screw over these\nhorses because that's my number one\npriority right absolutely the people\ntrying to capture the horse okay no I\nlove horses that's a true Englishman I'm\na friend to all animals in the AI screen\nI can get to either by clicking on the\nAI pocus ball or fondle you can now\naccess the air safety fun ah so there\nare no AI safety labs in the world so I\ncan invest in a city state to establish\nan AI safety lab do I have the money I\ndo not have the money you don't have\nthese are quite expensive yeah okay huh\nthere's no point in this conflict\ncontinuing any further except see once\nquite a bit of your money once all of my\nmoney okay how about we just you know no\nit's no hoax alright I mean I've got a\nlot of bombers that I don't feel like\nselling and not enough Russians to\nattack so I'm fine with that\nYork has finished its safety lab yeah\nokay okay let's go into the city\nyeah okay so you're seeing how all of\nthese specialist buildings yep have\nspots next to them\nmm-hmm so you can manually assign people\nto walk in them so right now we have a\nguy in the factory and in engineering\nthe factory yeah and nothing else\neveryone else is out in the fields\nproducing stuff gotcha\nnow both the AI lab and the air safety\nlab have a few specialists lots of I\ncan't lie empty mmm so right now we are\n[Music]\nstill ahead on rogue points yeah so I\nwant to do more safety and less regular\nresearch right so I'm gonna just the\nfact that this is an engineer is that\nmore powerful than just putting a dude\nin there like he's orange instead of\ngreen or whatever so good people walk\nthe tiles in the land mm-hmm so that\ndoesn't for example it's in everything\nfive food into gold right he's a\nfisherman yes and he's a fisherman in a\nplace that actually has reached unlike\nthis guy who was a fisherman in a place\nthat doesn't have any fish dude stop\nbeing a fisherman\ninstead come with me and be a Fisher of\na I know he's unemployed yeah you could\njust click the you don't like it you\njust click that thing yeah oh nice cool\ninteresting a bunch of silver and gold\nI'm okay with that\nI guess they she's completely fed up\nwith being at war right yeah also she\nhas no more units on our borders do bomb\nyeah yeah we really did just destroy\neverything if you had any units I would\nsay let's make war with Greece you know\nto just finish them off too but that's\nthat's fine I accept\nyeah you're welcome\nidiot Oh actually I should I should talk\nto Greece before we get into this hmm\nyeah yeah yeah I think it is yeah I feel\nsketchy hey do we want anything else\nfrom them a research pact they don't\nhave the money yeah cuz you've been\nspending it all on units that I\nimmediately destroy no another woman\nwhat do you what do you want what do you\nwant to end it all right I'm gonna bomb\nthe hell out of spotted end I mean I\ndon't see that you've really left me\nmuch choice it's funny they're\nprerequisites for the orthogonality\nthesis because you would think it would\nbe you all you need us to just think\nabout it a little bit\nhmm did we just destroy their Jets yeah\nwe did\nnice so how do you think about the\nthought of my thesis is a bit hard\nwithout having thought about AI as a\nthing yet mm-hmm and also having you\nguys a thing but not having thought for\nthe button politics very much I guess in\nso my phone calls or most aliens when\nyou have these systems that argument to\nbe in environments and doing various\nthings sure yeah that makes sense\nI guess it's it's kind of interesting\nlike it's very hard to know which ideas\nare obvious because there are so many\nthings that seem obvious in retrospect\nyou know I shouldn't be bombing this guy\nI should be taking these these horse\npeople are the absolute priority what am\ni doing good yeah so imagine like early\ndays of a guy that deals that you're\njust gonna hard code everything with\nsuch a system safety concerns are not\nobvious sure the system only does what\nyou tell it to do yeah and that is still\nkind of received wisdom about about AI\nyeah but once you start thinking with\nrobotics you realize it's something like\nit's enforcement learning hmm he's gonna\nbe a lot more salient\nnow we're gonna machine learning is only\ngonna come up much later in effective\nbut I think the earliest this when you\nstart mixing the idea of having\ncomputers think for themselves and\nrealizing that this is gonna be much\nharder than just coding everything by\nhand right that actually there's a level\nof unpredictability there yeah which\npeople didn't realize how early on yeah\nI think the maxim of kind of how things\nare easy and easy things are hard came\nfrom people who were very much working\non robotics\nright yeah that's really true the things\nthat seem easy to human beings are the\nthings that we've been doing for so long\nthat we're kind of like running in\nspecialized hardware yeah they're easy\nbecause we don't have to think about it\nbecause they're things that we like\nhighly optimized to do yeah and that\nincludes some of our morality all right\nwe just know that something will make a\nsuper embarrassed or just feel really\nguilty right yeah I guess that is like a\nlow-level support for for morality it's\ninstinctive yeah so it feels super\nobvious but it's not gonna be easy\nobvious to call this into an AI this\nthing that was put in place by a\ntremendously long process of evolution\ndoing game theory with all of these\nspecialized ad-hoc twists and turns and\ndetails finally sitting here chatting\nabout morality as you're forming the\noutskirts of Sparta they're not people\nthough yeah but there are relations of\npeople and they have been mean to you\nthey've been mean to me and they refuse\nto make peace yes and they are like not\nonly are they simulations of people but\nthey're like they're not accurate\nsimulations of people either you know\nthey're not high detail enough to have\nmoral weight just want to look at him\nagain hi hi I was going to his friend\nyeah that's his friend's face yeah so so\nare you going to cities on his side what\noh right yes cities and then pick Dublin\nDublin Dublin no you don't know you're\nreally really into Dublin okay the\nAztecs way into Dublin that's fair\nDublin's nice I mean it's a nice place\nSusan whispers okay yes yes whatever\nBasin yeah yeah cause it's fresh yeah it\nactually tastes different though yeah\noh I mean I could there any horses\nno no I'll leave him I'll leave him it's\nall kind I don't like killing civilians\nfor no reason so um I feel super green\nSuns like being killed for no ism get\nout of the temple what are you doing\ngetting the lab so what if we have to in\nthe factory - in the university that\nseems sensible yeah maybe you had one in\nthe Woodman so can we give any minute\nnumbers yeah yeah that's greens one no\nwe need somebody in that Oh\nyou've caught up yes we may actually win\nthis yeah I feel good about it does seem\nlike no one else's in the race which\nmakes it a lot easier to link yeah I\nfeel terrible it's not the thing is it's\nnot Sparta's fault that Alexander is an\nidiot I'm just gonna sue for peace one\nmore time I keep saying that behind you\nhuh ah finally thank you\nhow long have we been at war with Greece\nI think since since the forties it's\nbeen 30 years it's a 30-year war make\npeace yeah you can do this with all the\ncity-states now yeah I'm at war with\nFlorence who I'm pretty sure I've never\nseen but fine let's make peace yeah cool\nthanks man and Katmandu yeah hey and\nstuck home wait what oh they're at war\nas well ha did I just make world peace\nyeah I think that's made world peace it\nwas surprisingly easy I don't really\nunderstand the AI safety fund mechanic\nso it's just getting another safety lab\nso say you're going for challenge and\nyou only have one city that means you\ncan only have one safety lab which means\nthat most you have passed six safety\nbelt on nice he really not enough to\nbalance two olds needs but you can use\nthis funds to establish safe it absolve\nthe world I see so if you're if you're\nlow on cities but cash rich you can\nstablish safety lab somewhere else yeah\nand we finished researching ecology so\nwe can get globalization yeah that's\ngood and then night let's ask a data\nmining okay which almost makes sense\noh and now that you build universities\nin all the cities you couldn't build\nOxford University interesting if we had\nmore budget I would have just changed\nits name to Cambridge but does it feel\naxis like odds for them I'm a fan as\nwell not quite as good as Cambridge but\nnot that bad like yeah yeah I I'm not\ngonna say bad things about Oxford but I\nprefer Cambridge day but I in the\nabsence of Cambridge I think it's a good\nidea to build this because it gets +3\nscience and the for the contender free\ntechnology and we really could use both\nof those my a our researchers have\nreached the level of advanced AI the new\nAI manufacturing technique will give you\na 20% increase to production in all\ncities well that's great I'm really on\ntrack to win this one then yep thank you\nhow's risk yeah ai research level is\nhigher than our rogue yeah and no one\nseems to be going over here what is\nsatellites\nreveal the entire map that sounded great\nokay so you can get rocketry okay a good\nrule for rocket experimenters to follow\nis this always assumed that it will\nexplode also plasterer yeah it really\ndoes yeah how you could build Apollo\nprogram ah well yeah you can't because\nwe removed all of that from the game\nokay but that gets you a science victory\nyeah so basically we took the original\nscience figures from the game which is\nyou build Apollo program you build a\nspaceship you want you to office and our\nway mhm and instead we said you in a\nsense victory if you build an alliance\nof intelligence oh so you would notice\nthat you can now build a smart defense\ngrid\nimprove the defense of the city must\nhave advanced AI to build mmm-hmm\nincreases risk of rogue super\nintelligence by 1 per turn ah so if we\nwere still at war and we started trying\nto use our AI stuff in a military way\nthat would give us a big advantage\nbut also increased risk yeah so usually\nthe defense buildings you kind of need\nto build them one after the other so you\nbuild walls and then you build the thing\nthat comes afterwards I think it's a\nmilitary base or something like that and\nthey kind of stack up but walls give you\nplus for defense right small defense\nkids says forget all of that you can\njust oughta make your defenses you get\nplus fifteen to your defense which is\nquite a lot right but hey you can't\nbuild it until you've reach this\nthreshold and B it does come with some\nrisk right because you've got a whole\nbunch of military type stuff being\nsoftware control then you're also\nresearching more military type stuff\nyeah that makes sense\noh he has wine hey and he has excess\nwine yeah maybe he wants cotton for it\nyeah you want some you want some where's\nmy cotton want some cotton yay good and\neverybody's happy because we have wine\nagain once more Britain has wine how\nlong does much rejoicing that's a lot of\nmastic troops well I mean they're not\ntechnologically advanced curious about\nwhat the Aztecs are up to this is it\nmine is mi they're invading a race in\nGreece's land maybe they have open\nbottles with them though oh we have data\nmining that allows us to build people\nooh that sounds good yeah I guess we're\njust being ok with Canterbury being\nsurrounded by our Tech's I mean you have\na lot of promise Hastings has finished\ntheir research lab yeah oh I could build\na military AI lab yeah that gives you 30\nexperience for all units\nyou must have advanced AI to build it it\nincreases risk of rope superintelligence\nby 2 per turn yes oh this is even worse\nthan a smart Defense Grid because now\nwe're talking about offensive\ncapabilities so this is the same there\nis the kind of box that leaves the\nallmovie deletes the military academy\nthat gives you extra experience so you\nknow how all of the indicators got one\nimproved one upgrade as we created them\nyep it's because we started the modern\nage so you have a box in all of your\ncities right plus 30 means that you will\nget added to upgrades just as you create\nthem yeah you know what this is like I\nfeel like there's all kinds of game\nmechanics here that we're just not using\nby virtue of being too good if you hide\ntoo well and there are problems that\nwe're just not even facing like how\nbalance the necessary military benefit\nthat we need to survive with the area\nswell day I risk mmm I'm very\nuncomfortable about this hey the game of\nchess is not merely an idle amusement\nseveral very valuable qualities of the\nmind useful in the course of human life\nare to be acquired and strengthened by\nit so as to become habits ready on all\noccasions for life is a kind of chess\nthanks Benjamin Franklin hmm he's on my\nunderwear actually I'm wearing Benjamin\nFranklin underwear oh it's true that's\ngood to know\nyeah it's a hundred dollar bill\nunderwear but hmm okay so this no means\nthat artists provide +1 n I research I\ndon't have any artists do I know there\nare people who can be put in temples and\nah it's a temple actually didn't I okay\nnow we can build a data center it's the\nfuture future\nit became the future yeah Jonathan\nCoulton was right okay so so they're you\nwith deep blue is that it captures\nsociety's mind so it's not just AI\nresearchers that are now able to\ncontribute to this technological\nprogress\nlots of other people can suddenly become\npart of it right\nit's becomes more of an\ninterdisciplinary thing yes almost a\nsocial movement usually and in the midst\nof all of this\noh hell apparently is breaking loose or\ninvolve us debris I feel like it's gonna\nha\nyou all right hey he has changed his\nfacial expression yeah he looks a lot he\nlooks about the same angry it's just a\nbit different this looks like a\ndifferent kind of angry now look\nsomething I don't know he looks like you\njust spilled his pint yeah this is\neverybody looks very well your big [ __ ]\nand there is a unprotected general\noutside London that the tank can just\nsteam all over\nhe never was very bright was he\nMontezuma\nI spent all this time building missile\ncruises and tanks and things and didn't\nget to use them and now I get to use\nthem bombers awaken\nI feel bad for the house Tech's man I\nfeel embarrassed on their behalf\nIqbal was body nothing I'm cool I'm\ngonna assume that he's the Star Wars guy\nthe fish one\ncome back with bomas is like very\neffective but actually that fun\npiss off Montezuma\nI don't I don't I don't understand\nMontezuma at all how much is this war\ngonna have to cost him before he\nrealized this it's stupid\ncool house takes a little just going on\nbut not something any jets I guess leads\non television\nyeah and they're not gonna get vision\neither cuz I'm gonna blow the crap out\nof everything before it gets close he's\nlike yeah I'm annoyed I'm annoyed\nbecause I'm trying to do a nice thing\nhere spending a lot of resources on air\nsafety research I'm trying to create\nutopia and this guy who still has a\nskull for a hat is coming at me with I\nthink it's just so those on his head\nworld war two he's got a golden skull on\nhim man\nno yes yes this is very much a tricycle\nwhilst the will to fight yeah what would\nit take no okay just it's very very\ndetermined to be stupid and hurt himself\nso I feel like AI safety-wise hmm this\nhas been relatively uneventful yeah\nbecause no one else is gone what else is\ngoing for that going for that wind\ncondition how are we doing we're still\nmore than double yeah so we actually can\nbe quite safe to just max out our AI\nlabs yeah I guess in this scenario it\nturns out that alignment is not so hard\nyes opening I will I'm in medieval yeah\nyeah I mean I guess if you have if you\nhave full control of the project yeah\nthere's only one person working on it\nand you haven't really kind of started\nracing until you've discovered Northey\nthesis and you've put a bunch of people\non the problem to begin with\nyeah they could just come up with a\nsensible solution that would be nice for\nthat'd be lovely yeah something is world\nwilling but kind of actually there are\nworlds like that I've got atomic theory\nyeah cool don't need it you're AI\nresearchers have reached the level of\nexpertise the new AI driven abundance\nincreases happiness in each city by to\nmeet that's good Wow we're doing really\nwell and it's just strange that we're at\nwar with it created what is that oh it's\nuranium yeah it just became uranium yeah\nwe never cared about it before so we\ndidn't notice that the ground was gleich\nthat's exactly right\nokay you know you only see things once\nyou know they're the other yeah no\nthat's true I do feel bad at talking\ndouble-o do only breathing oh right in a\nbridge in Dublin yeah we're gonna\nliberate this out of Dublin ah\nthank us in the future when you're no\nlonger a puppet\nif ridiculous Dalton skull hat man I\nfeel bad though I do suicide is it such\na sadness your move Montezuma it's just\nbeen nuked it must be so pissed yeah\nfine idiot I can't get over how dummy is\nyour eye our research is one of the risk\nof rogue superintelligence is becoming\nhigher consider dedicating more\nresources to our safety to prevent\ncatastrophe well that's because we got\n250 okay we're still about W I'm\ncomfortable with that\nokay 602 now we are steadily advancing\nwe are extremely happy Wow Nikki\nefficient fine\nI'm more about fusion personally in this\nbeautiful oh it is fusion yeah I read\nfission yeah yeah your vision for like\ntwo years now\nokay good you're discovering these\nthings one of you now that's pretty\ncrazy we heavily research oriented\ncivilization that is true\nthis is Britain we care about two things\nAI research and bombers and those horse\nguys so you're making so much research\nnow might just be the last time oh my\ngosh look yeah do it what's happening on\nI already promised it yeah no one else\nis everyone okay fine ah well so it\nturns out that's how you avoid an AI\nrace yes you have no one pursued other\nthan you yeah perfect\njust make sure everyone's incredibly\nmilitaristic I can start researching\nfuture tech yeah you're done you done\nwith the research tree much left for the\nAI to do for you yeah yeah I guess you\njust became a tech speed speed down the\ntech tree use that to gain follow up and\nthat you're going down the tech tree\nyeah everyone else decided to go mean\nturistica it backfired because we had\nsome priority yeah so uh good wouldn't\nbe so easy up because it passed 500 is\nthat yeah but we're nearly there\nisn't it weird that\nputting people in the Opera House is\nbetter for AI research than putting them\nin the university I mean have you seen\nthe kind of Whistler gets on your bills\nlike this no comment\ncoming in with tanks now yeah busy busy\nanyway no concern of Falls no we're just\nlike double checking our code yeah you\nhave achieved victory through mastery of\nscience you have conquered the mysteries\nof nature and ushered in a technology\nthat makes utopia Within Reach\nyour triumph will be remembered as long\nas the stars burn in the night sky\nhurrah\neverything is wonderful and the Sun\nnever sets on an artificial intelligence\npowered British Empire everything is\nbeautiful nothing hurts and also for the\nbest in this best of all possible worlds\n[Music]\nthis video took a lot more editing work\nthan my usual videos in part because\nit's a lot longer and also because I\nstarted with about 9 hours of gameplay\nfootage I had to get some new equipment\nto make it all work so I want to thank\nmy excellent patreon supporters all of\nthese people here for making it possible\nin this video I'm especially thanking\nSteve thank you Steve\nI hope you enjoyed the little\nbehind-the-scenes video I made about how\nthis one was put together and I've also\nuploaded the full like 8 hour long\nversion if you want to watch that anyway\nthank you again and I'll see you next\ntime you think you can make anything\nvideo", "date_published": "2018-02-13T17:17:58Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "d58340d0171e881f7c49957a2d3b2c3d", "title": "Scalable Supervision: Concrete Problems in AI Safety Part 5", "url": "https://www.youtube.com/watch?v=nr1lHuFeq5w", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "hi this is part of a series of videos\nabout the paper concrete problems in AI\nsafety it should make some sense on its\nown but I'd recommend checking out the\nother videos first there's a link to the\nplaylist in the description so before we\ntalked about some problems that we might\nhave with AI systems like negative side\neffects reward hacking or wire heading\nwe talked about good hearts law like how\nif you use an exam as a metric students\nwill only learn what's on the exam and\nthen the exam will stop being a good\nmetric of how much the students know the\nobvious question here is why not just\nmake an exam that properly tests\neverything you care about and the\nobvious answer is that would take way\ntoo long or cost way too much we often\nface a trade-off between how good of a\nmetric something is and thus how\nresistant it is to things like good\nhearts law and how expensive that metric\nis in terms of time money or other\nresources for our cleaning robot example\nwe could have a reward system that\ninvolves a human following the robot\naround at all times and giving it\npositive or negative reward depending on\nwhat the robot does this still isn't\nsafe with the powerful intelligence\nbecause it still incentivizes the AI to\nmanipulate deceive or modify the human\nbut assuming we find a way around that\nkind of thing it's a pretty good metric\nthe robots are not going to maximize its\nreward by just putting its bucket on its\nhead or something like that but this\nisn't practical if you're going to hire\nsomeone to follow the robot around all\nthe time you may as well just hire\nsomeone to do the cleaning that's why we\ncame up with metrics like use your\ncameras to look around at the amount of\nmess in the first place\nthey're cheap for the robot to do on its\nown though there are some situations\nwhere constant human supervision can be\nused for example when developing\nself-driving cars there's always a human\nbehind the wheel to stop the AI from\nmaking serious mistakes and this makes\ngood sense\nlegally you've got to have a qualified\nhuman in the car anyway for now but this\ndoesn't scale well paying humans to\nsupervise the millions of miles your\ncars need to drive before the system is\nfully trained is really expensive if\nyou're Google you can afford that but\nit's still a huge cost and it makes a\nlot of projects infeasible a human pilot\ncan safely oversee an autonomous drone\nbut not a cooperating swarm of hundreds\nof them so we need to find ways for AI\nsystems to learn from humans without\nneeding a human to constantly supervise\neverything they do we need to make\nsystems that can operate safely with\nless supervision a slightly more\npractical metric for\ncleaning robot is to have the robot do a\nday's cleaning and then have some humans\ncome around and do a full inspection of\nthe place at the end of the day checking\neverything's clean checking everything's\nin its place and giving the robot a\nscore out of ten for its reward if the\nrobot breaks something throws away\nsomething important or just sits there\nwith its bucket on its head it will get\nno reward so this still avoids a lot of\nour negative side effects and reward\nhacking problems as long as the\ninspection is thorough enough and the AI\nis weak enough that the robot can't\ndeceive or manipulate the humans but\nthere are problems with this too and a\nbig one is that in this type of\nsituation things like reinforcement\nlearning will be really slow or just not\npossible see with a metric like keeping\ntrack of how much mess there is with\nyour cameras the robot can try different\nthings and see what results in less mess\nand thus learn how to clean but with a\ndaily inspection the robot is operating\nall day doing thousands of different\nthings and then it gets a single reward\nat the end of the day how is it meant to\nfigure out which of the things it did\nwere good and which were bad it would\nneed an extremely large number of days\nbefore it could learn what it needs to\ndo to get good scores on the inspections\nso figuring out how to make AI systems\nthat can learn using a sparse reward\nsignal would be useful for AI safety and\nit's also a problem that's important for\nAI in general because often a sparse\nreward is all you've got\nfor example deep Minds dqn system can\nlearn to play lots of different Atari\ngames using just the pixels on the\nscreen as its sensor input and just the\nscore as its reward but it plays some\ngames better than others it's far better\nthan any human app break out but it\ncan't really play montezuma's revenge at\nall now there are a lot of differences\nbetween these games but one of the big\nones is that in breakout you get points\nevery time you hit a brick which happens\nall the time\nso the score and thus the reward is\nconstantly updating and giving you\nfeedback on how you're doing\nwhile in Montezuma's Revenge you only\nget points occasionally for things like\npicking up keys or opening doors and\nthere are relatively long stretches\nin-between where you have to do\ncomplicated things without any score\nupdates to let you know if you're doing\nthe right thing even dying doesn't lose\nyou any points so it can be hard for\nsystems like this to learn that they\nneed to avoid that how do you make a\nsystem that can learn even when it only\noccasionally gets feedback on how it's\ndoing how do you make a system that you\ncan safely supervise without having to\nconstantly watch its every move how\nyou make supervision scale we'll talk\nabout some different approaches to that\nin the next video\n[Music]\nI want to take a moment to thank my\nexcellent patreon supporters these\npeople in this video I'm especially\nthanking Jourdan Medina a ramblin wreck\nfrom Golden Tech who's been a patron of\nthe channel since July thank you so much\nfor your support Jordan and thank you to\nall of my patrons and thank you all for\nwatching I'll see you next time", "date_published": "2017-11-29T21:47:29Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "009a0e3ce06edbf57b5908092c78d9ee", "title": "Where do we go now?", "url": "https://www.youtube.com/watch?v=vYhErnZdnso", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "okay so let's get right to it most of\nyou are probably here because you've\nseen my videos on the computer file\nchannel but on the off chance that you\nhaven't or you haven't seen them all or\nyou don't remember them all I mean the\nfirst one was like four years ago I\nthought the first video should go\nthrough the existing staff get everybody\nup to speed and also talk about the\nvarious directions that this channel\ncould go next so while we're going\nthrough the videos so far be thinking\nabout what kind of things you're\ninterested in and what kind of things\nyou would want to see more of and leave\nme comments so I can decide what to do\nnext\nalso everyone should be subscribed to\ncomputerphile if you're interested in\nthis kind of thing firstly because it's\na great channel and secondly because I\nplan to continue making videos there in\naddition to these ones okay so the first\ntwo videos I made were about sort of\nmachine learning basics just concepts\nlike optimization and the idea that we\ncan think of intelligence as\noptimization we can think of intelligent\nsystems as systems which optimize a\nparticular function over a particular\nspace the second video is just\nexplaining what's meant by a space in\nthis context people who are familiar\nwith machine learning stuff will know\nthis but if not check it out I could\nmake more machine learning basics videos\ngoing through the fundamentals of how\nsome of the algorithms work and some of\nthose sort of core concepts of the field\nalthough I feel as though those are\nprobably fairly well covered elsewhere\nlike on computer file but if people are\ninterested in seeing more of that kind\nof content for me let me know ok then\nthe third video the holy grail of AI is\nwhere the ideas and the hair start to\nget really interesting it's where we\nstart talking about the difference\nbetween the type of AI that we have now\nand the type of science fiction AI that\nwe think of sort of human level true AI\nand we talk about the concept of\ngenerality the idea of having a single\noptimizing system which is able to\noperate in a wide variety of different\nnames rather than its narrow domain\nspecific intelligence\nwe have now from there we go on to the\ndeadly troops of AI where I start to\ntalk about super intelligence and the\nway that a very powerful intelligence\ncan be very dangerous even giving a\nfairly innocuous seeming goal like\ncollecting stamps there are all kinds of\nareas we could go into from that video\nfor example we know that just saying\ncollect as many stamps as you can is a\nvery bad function to give this type of\nagent but what type of function might\nactually work what might be safe we\ncould also look at containment if you\nhave an agent like the stamp collector\nis there any safe way to run it without\nbeing completely confident that you've\nchosen the right objective function for\nit so the next video is AI\nself-improvement\nwhich is about the possibility that an\nartificial intelligence could improve\nits own code that really only touched on\nthe on the surface of that there's a lot\nwe can talk about there in terms of how\nlikely this is how possible this is what\nthe timescales might be for it happening\nall kinds of questions there to look\ninto if people are interested so then we\nhave the asommus laws don't work video\nwhich you know I feel like I was too\nunkind to ask them off in this video and\nI came across a bit too dismissive but I\nstand by the content of the thing as\nmost laws don't work as a solution to\nthis problem never really did and we're\nnever really meant to the field has\nmoved on and they're not really relevant\nanymore so I don't really I don't want\nto make more videos about that the next\nrelevant video was the one titled AI\nsafety which was sort of a response to\ndoctor Holden's video doc called\nMcCambridge who has another video on\ncomputer file which you also should\ndefinitely watch that video touches on a\nfew different subjects I think the one\nthat has the most potential to be built\non is the question of predicting future\ntechnology and the various problems\nassociated with that so if you want to\nsee more about the the difficulties in\npredicting AI we can make more stuff\nabout that right the next video was\ncalled a eyes game playing challenge\nwhich is mostly about go made that video\nbecause at that time deep minds\nalphago had just beaten the world\nchampion and that video is about the\ngeneral way that AI go about solving\nthese kinds of perfect information board\ngames and why go is so difficult and why\nit was such a huge challenge and such a\nhuge achievement for deep mind there was\noriginally going to be a follow-up video\nto that one about how it actually works\nin some detail which we never got around\nto shooting and there is a pretty good\none on computer file as well but I can\ntalk more about that if people want more\ninsight into how after their works and\nthe last two generally I won't want you\nto fix it and the stop button problem\nkind of go together there about one of\nthe more concrete problems people are\nworking on right now in AI safety which\nis just if you have a system general\nintelligence that you've given an\nobjective to how do you design it in\nsuch a way that it will accept being\nshut down and modified because by\ndefault general intelligences are\nincentivized to prevent themselves from\nbeing modified from having their utility\nfunctions modified specifically we could\ngo into more detail on that some of the\nother approaches people have proposed\nand maybe go slightly more technical\nthan the computer file videos I also\nmade some videos unrelated to artificial\nintelligence like the first one I made\nwas actually about public key\ncryptography if you'd like an intuitive\nunderstanding of how public key\ncryptography works how it allows you to\ncommunicate privately with people\nwithout first agreeing on a shared\nsecret to use as a key check that video\nout I can do more crypto stuff if people\nare interested but I think that that's\nfairly well served elsewhere on YouTube\nbut let me know in the comment there was\nalso the code golf video where I\nexplained the concept of the game code\ngolf and I gave a very short program I\nwrote that made music which looks like\nthis I can't remember how many\ncharacters it is two hundred and forty\nsomething I think anyway it looks like\nthis and sounds like the background\nmusic it's in the background music the\nwhole time I never really fully\nexplained how that code works\nin detail if you want a video on that\nlet me know another thing I'm thinking\nof doing is taking a current research\npaper and just going through it bit by\nbit so that over a series of videos you\nget hopefully as full an understanding\nof it as you would from reading the\nwhole paper there are a couple of\ncandidates the foremost I think is\nconcrete problems in AI safety which is\noften recommended as a good introductory\npaper so if people would like to see\nthat leave a comment I could do stuff\nabout the work idea as a PhD student\nabout artificial immune systems which is\nonly tangentially related but I think\nit's really interesting or completely\nunrelated stuff I once made a robot that\ndeliberately blinds people with a later\nI'm currently working on a game that you\nplay using only your eyebrows I made\nthis battle-axe which is also an\nelectric ukulele like I should make a\nside channel for this stuff anyway where\ndo we go now let me know what you think\nin the comments\n[Music]\nplease we", "date_published": "2017-03-31T20:16:27Z", "authors": ["Rob Miles"], "summaries": []} +{"id": "589b8ea09863fc6c017c1bc563e15ee0", "title": "AI Safety at EAGlobal2017 Conference", "url": "https://www.youtube.com/watch?v=BfcJymyTiu0", "source": "rob_miles_ai_safety", "source_type": "youtube", "text": "this weekend I went to Imperial College\nLondon to attend the effective altruism\nglobal conference the conference isn't\nactually about AI it's about charity the\nidea is like if you want to save human\nlives and you've got a hundred pounds to\nspend on that you have to make a\ndecision about which charity to give\nthat money to and they'll all say that\nthey're good but which charity is going\nto save the most lives per pound on\naverage it's a difficult question to\nanswer but it turns out that there are\npopular charities trying to solve the\nsame problem where one charity is a\nhundred or a thousand times more\neffective than the other it's kind of\ninsane but it can happen because apart\nfrom these guys nobody's really paying\nattention you know people don't really\ndo the work to figure out which\ncharities are actually effective or what\nthey're trying to do so that's pretty\ninteresting but it's not why I attended\nsee there's an argument that if people\nlike me are right about artificial\nintelligence then giving money to help\nfund AI safety research might actually\nbe an effective way to use charitable\ndonations to help the world not\neverybody agrees of course but they take\nthe issue seriously enough that they\ninvited a bunch of experts to speak at\nthe conference to help people understand\nthe issue better so this charity\nconference turns out to be a great place\nto hear the perspectives of a lot of AI\nsafety experts Victoria Krakov nur from\ndeep mind safety team and a wine Evans\nfrom the future of humanity Institute\ngave a talk together about careers in\ntechnical AI safety research which is\nbasically what this channel is about I'm\nnot going to include much from these\ntalks because they were professionally\nrecorded and they'll go live on YouTube\nat some point I'll put a link in the\ndescription as in when that happens but\nyeah Vica talked about what the problems\nare what the field involves and what\nit's like to work in AI safety and a\nwine talked about the places you can go\nthe things you should do you know what\nthings you'll need to study what\nqualifications you might need or not\nneed if the case may be they answered\nquestions afterwards the sound I\nrecorded for this really sucks\nbut yeah the general consensus was there\nare lots of interesting problems and\nhardly anyone's working on them and we\nneed at least 10 times as many AI safety\nresearchers as we've got deepmind is\nhiring the future of humanity Institute\nis hiring actually there will be a link\nin the description to a specific job\nposting that they have right\nand a wine is working on a new thing\ncalled org which is an up yet but we'll\nbe hiring soon lots of opportunities\nhere o some people were there out doing\nthat if animals can experience suffering\nin a way that's morally relevant then\nmaybe factory farming is actually the\nbiggest cause of preventable suffering\nand death on earth and fixing that would\nbe an effective way to use our charity\nmoney so I tried out their virtual\nreality thing that lets you experience\nthe inside of a slaughterhouse from the\nperspective of a cow worst we are\nexperience of my life seven point eight\nout of ten Helen toner an analyst at the\nopen philanthropy project talked about\ntheir work on artificial intelligence\nanalyzing how likely different scenarios\nare and thinking about strategy and\npolicy you know how we can tackle this\nproblem as a civilization and how\nthey're helping to fund the technical\nresearch that we'll need in the\nquestions she had some advice about\ntalking to people about this subject and\nabout doing the work yourself\nhere's Alan Defoe also from the open\nphilanthropy project who went into some\ndetail about their analysis of the\nlandscape for AI in the coming years I\nreally recommend this talk to help\npeople understand the difference between\nwhen people are trying to tell\ninteresting stories about what might\nhappen in the future and when people are\nseriously and diligently trying to\nfigure out what might happen in the\nfuture because they want to be ready for\nit\nsome really interesting things in that\ntalk and I'd strongly recommend checking\nthat out when it goes up online probably\nmy favorite talk was from shahara VIN\nfrom the Center for the Study of\nexistential risk at the University of\nCambridge he was there talking about a\nreport that they're going to release\nvery soon about preventing and\nmitigating the misuse of artificial\nintelligence really interesting stuff\ndr. Levine is very wise and correct\nabout everything to consume it in a more\nvideo engaging way what miles has that's\nall for now the next video will be the\nnext section of concrete problems in AI\nsafety scalable supervision so subscribe\nand click the bell if you want to be\nnotified when that comes out and I'll\nsee you next time shoes is cashews\neverywhere this is a great conference\nI want to thank my wonderful patrons who\nmade this channel possible by supporting\nme on patreon all of these excellent\npeople in this video I'm especially\nthanking Kyle Scott who's done more for\nthis channel than just about anyone else\nyou guys should see some big\nimprovements to the channel over the\ncoming months and a lot of that is down\nto Kyle so thank you so so much\nokay well there's cashews here this is a\ngreat conference", "date_published": "2017-11-16T19:21:00Z", "authors": ["Rob Miles"], "summaries": []}