John Schulman
John Schulman, OpenAI cofounder and researcher, inventor of PPO/TRPO talks RL from human feedback, tuning GPT-3 to follow instructions (InstructGPT) and answer long-fo...
https://media.transistor…c00.mp3?src=site
The answer was affirmative. We can get an agent to basically use a set of tools that we give it, in this case the browsing commands, like searching. I would say I expect AI to be able to do a better job than humans at most jobs that humans do now. Five years or so.

TalkRL podcast is all reinforcement learning, all the time, featuring brilliant guests, both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host, Robin Chauhan.

John Schulman is a co-founder of OpenAI and a researcher and engineer at OpenAI. He is well known for major contributions to the field of reinforcement learning, including the TRPO algorithm, that's trust region policy optimization; GAE, generalized advantage estimation (those are from his UC Berkeley dissertation); and TRPO's descendant, proximal policy optimization, or PPO. His current focus at OpenAI is on RL from human feedback. John, welcome to the show and thanks so much for being here.

Thanks a lot for having me.

You were literally one of the first people I thought of when I started the show three years back.

Thanks, I'm honored.

It means a lot to me to have you here today. I definitely remember your nuts and bolts of deep RL video back in the day, and watching that multiple times and gaining a lot from it. You helped a generation of RL practitioners back then.

By the way, there's going to be a reboot of the nuts and bolts presentation. I got invited to give a talk at NeurIPS this year on it. I'll have to revamp the guidelines and everything. That'll be fun.

Oh, that's awesome. Can't wait for that. You were clearly one of the earlier pioneers in deep RL. How did you choose to move your focus to RL from human feedback? Why is that an important problem? Why is that important to you?

After GPT-3 was trained, I was blown away by how smart it was, and I realized the next frontier was figuring out how to make language models actually useful.
I'm still really interested in RL, but solving RL benchmarks isn't the end of the story. To use your RL algorithm, you need a reward function. Where does the reward function come from? In RL benchmarks, you usually just code up the reward function, but if you're not in a simulated environment, that doesn't work. What we have to do in any kind of real-world use case is have humans look at what the AI did and decide if it was good or bad. Exactly how you define this reward becomes a really challenging and important problem, especially as the tasks get harder to evaluate.

Another angle on this is that language models are very smart, but it's hard to get them to do anything useful. A big part of that is they're not necessarily trying to do what you want; they're just trying to imitate the training corpus. That means there's a big opportunity to improve them a lot by just giving them the right objective. That's what we can do by applying RL to these language models, using human feedback to define the reward.

Is human feedback harder or very different in some way than using a synthetic reward?

There are a lot of new complications. You have to collect a data set dynamically. You're always in the business of building data sets of human preferences. Often the data quality there matters more than various algorithmic details. You also have to think a lot about exactly how you're giving the task to the human trainers, and various other things that you wouldn't have thought about if you just had a programmatic reward function.

Does the difference between human raters, or the noisiness of the reward signal, cause any problems?

With noise, you definitely need to be below some threshold of noise to learn anything. I think in general a large noisy data set can be as good as a smaller clean data set. Actually, noise isn't the thing that worries me the most. It's more that there are sometimes consistent biases that people have.
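As an aside, the human comparisons described here are commonly turned into a trainable reward model with a pairwise (Bradley-Terry style) loss; the interview doesn't specify the exact loss, so this is a minimal illustrative sketch with toy numbers:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    # Pairwise loss: -log sigmoid(r_chosen - r_rejected), averaged over
    # comparisons. It is low when the reward model scores the
    # human-preferred answer above the rejected one.
    losses = [
        -math.log(1.0 / (1.0 + math.exp(-(rc - rr))))
        for rc, rr in zip(reward_chosen, reward_rejected)
    ]
    return sum(losses) / len(losses)

# Toy reward-model scores for three comparisons (chosen vs. rejected).
loss = preference_loss([1.2, 0.3, 2.0], [0.4, 0.5, 1.0])
```

Minimizing this loss over many comparisons is what pushes the reward model to rank good answers above bad ones.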
For example, in settings like question answering, or settings where you have a model writing some text, often people prefer longer answers, so you end up with these very verbose answers, if you're not careful with the instructions, that is. You can also instruct the raters to reward brevity, but if you're not careful, you can incentivize the wrong kinds of behaviors.

So let's move to some of your recent work. First up is WebGPT: Browser-assisted question answering with human feedback. That's Nakano et al., with yourself as a co-author, in 2021. Can you tell us the main idea of this paper? What is WebGPT?

In WebGPT, we basically took our language models and hooked them up to a web browser so they could retrieve information from the web. They can write an answer by summarizing the relevant pages from the web. That way, if you're asking a question about current events, or a question that requires some detailed scientific or technical knowledge, this AI can go out and look up the answer, with detailed citations to its sources.

I would say there are two interesting points to this. One is that we were exploring whether you could turn language models into a kind of agent. There's a lot of data on the web of different texts that people have written, but there's not a lot of data that shows how to actually do some multi-step process. So it's not that clear, a priori, whether you can get a language model to actually carry out some iterative process. We just have a lot of data like writing essays and having chats and so forth. So that was one thing we were exploring here, and I think the answer was affirmative: we can get an agent to basically use a set of tools that we give it, in this case the browsing commands, like searching, scrolling, clicking on links.

The second theme of this paper was around truthfulness. A big issue with language models is that they're not very reliable at giving you true information. They know a vastly superhuman amount.
But if you prompt them in the wrong way, they'll just output lots of plausible-sounding nonsense. How to fix that is one of the biggest research questions in the world of language models. I think it's going to be challenging to fully fix it, but I think a big part of the story involves retrieval, and having models write answers that contain citations to trusted sources. That way, a person who's checking over the answer doesn't have to go and try to figure out where the model might have gotten this idea; they can go directly to the source and see if it supports the AI's statement.

With WebGPT, we just wanted to see: if we do give the language model a really flexible interface to the web, can we have it answer hard questions truthfully, with the help of all these citations? And it's actually really non-trivial, because if you look at the data that we used, the Reddit Explain Like I'm Five questions, the questions are really varied. Some of them are about science, history, current events. Our raters didn't necessarily know anything about these topics, but still they had to judge the detailed answers that were written. It would have been really hard to do that without the supporting citations. So we validated that we could get good feedback in a hard domain like this with the help of citations.

Can you talk about where the idea for WebGPT came from? Is that an idea you've had kicking around for a while, or was it something that came up recently before the paper? How did that play out?

Some of the ideas had been floating around. We actually had a project at OpenAI very early on called World of Bits, where we were looking at controlling web browsers, or doing tasks on the internet with a web browser, but it was way too early at the time, so we abandoned it for a few years. Back then, we were trying to do it with full visual input.
So we thought, yeah, we could give some instructions to the agent, like go and figure out the address of this building or something, and the agent would go and search the web, or use Google Maps or whatever, to figure out the answer. And we were trying to do this all in pixels; that obviously didn't work very well. But now we have these great language models that work on text data. We can also extract the text out of web pages to get most of the information. We can't really interact with a lot of dynamic websites, where there's a lot of JavaScript and images and so forth, but as long as it's just browsing and reading text, we're fine. So we had good enough models, and that made it feasible to revisit this idea of using the internet as an environment. I would say that was one of the sources of inspiration: that long-standing thread about using the internet as an environment.

Another motivation was just that after we started playing with GPT-3, we noticed that it had all these problems with factual accuracy and the reliability of the information it was giving us. That motivated doing more research on how to make language models more truthful. We were brainstorming what to do there, and we went through some docs, and eventually decided that we wanted to try some question answering using the web, looking up knowledge on the web to help answer questions.

The original version of the project actually used trivia questions. There's this well-known data set, TriviaQA, that has some basic trivia questions. We first worked a little bit on that data set and tried to see if we could boost the model's accuracy by giving it web search, and that worked pretty easily. So then we decided to move on to long-form question answering, and that was the project we ended up working on for a while.
It seems like you used a few different data sets here, and a number of different training methods. I'll just list them: behavior cloning, reward modeling, reinforcement learning, and rejection sampling.

We were using a fairly standard methodology, which was actually adapted from previous work on RL from human preferences. The pipeline is: you first train a model with supervised learning, where you have human demonstrators show how to do the task, how to map from observations to actions. That's the supervised learning, or behavior cloning, step. Then we train a reward model, or preference model, that looks at two actions or two trajectories and decides which one is better. In this case, in a question answering setting, you're looking at two answers and deciding which answer is better, and we use that to train a reward model that assigns a higher score to the good answers than the bad ones. Then you do reinforcement learning against that reward function. And of course, you can iterate these last two steps. After you do a little RL, you've exploited some of the flaws of the reward model, or some of the noise in the reward model, and it's not necessarily accurate on your new distribution of data. So you collect more pairs of samples, refit the preference model, and then do another iteration of RL. That's the whole RL from human feedback pipeline.

There's this other idea called rejection sampling, or best-of-n sampling, and in general you can do other kinds of search, too, where instead of doing RL, once you have your reward model, you can just search against that reward model. You can collect a bunch of samples, re-rank them with the reward model, and take the best one as your action.

Kind of like MPC?

Yeah, exactly. It kind of depends exactly what setting you're in, what you can do.
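The best-of-n idea described above fits in a few lines. This is a toy sketch: the sampler and reward function here are illustrative stand-ins, not the actual WebGPT models.

```python
from itertools import cycle

def best_of_n(prompt, sample_fn, reward_fn, n=16):
    # Best-of-n (rejection) sampling: draw n candidate answers and
    # return the one the reward model scores highest.
    candidates = [sample_fn(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: reward_fn(prompt, ans))

# Toy stand-ins for the policy and the reward model (hypothetical).
answers = ["short answer", "a longer, cited answer", "noise"]
sampler = cycle(answers)
sample_fn = lambda prompt: next(sampler)
reward_fn = lambda prompt, ans: len(ans)  # toy reward: prefers longer text
best = best_of_n("Why is the sky blue?", sample_fn, reward_fn, n=6)
```

Unlike RL, this does no gradient updates to the policy; it spends extra compute at inference time searching against the reward model instead.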
If you're in a setting where there's some environment you're interacting with, then you would have to simulate the dynamics of your environment, so that would look kind of like MPC. In our case, the only thing we had to learn a model of was the human preference. It's a question answering setting, so it's really a contextual bandit problem, and it's straightforward to sample a bunch of actions, where each action is a full answer, and re-rank them, or search over answers.

In terms of the action space, was the action space just a list of commands, or is it still generating tokens like a regular generative model?

We were generating tokens. We had two phases in each episode of the RL task. There's first a browsing phase, where the model goes and issues searches and clicks on things and quotes relevant information. If it sees something useful on the page, it'll quote it using this quote command. Then, once it's done browsing, it'll issue another command called end browsing, and it'll write its answer. That's also expressed in tokens. But really, we rolled this all into one big RL task, where your episode involves browsing and writing out the answer, and it's all one big RL episode.

Did you think this was going to work well, or were you kind of surprised?

At the very beginning of the project, we didn't know if it was going to work or not. After we did the initial experiments with TriviaQA, which actually didn't take that long to get running, it became pretty clear that it would work, that the browsing part worked at least. And we already knew that we can get these models to write pretty good long-form text, if you give them a bunch of snippets of text that they can cite.
I noticed the human raters' task was quite complicated. There was a long guide, and many types of feedback they were giving, but in the end the paper said that only the final rating was used. I was curious if you had any comment on that. Why do you think the model couldn't use that extra feedback? Was it maybe just too much, or not enough samples?

Yeah, that's been one frustrating finding so far, in that project and in some other projects where we've had the same finding. You have your raters go through this long process for each comparison they do, where they're comparing a pair of answers, and then you only use one bit of information from this whole process, which might have taken half an hour. It seems like it would be better if we were able to extract more information about the process they went through in arriving at the answer. We did collect all sorts of other information. We had them provide ratings along several different axes, like coherence and factual accuracy and so forth, but in the end we didn't really get much of a boost out of using any of this other information. So it seems like it should be possible to do better, but unfortunately this methodology, which seems kind of dumb, is so far hard to beat. People have tried various other ideas for how to use human feedback instead of getting these preference scores. There are various other things you can do: you can have them write critiques, or maybe edit the responses. I think some of these things are also promising, but this methodology of collecting preference data works well. I think it's still an open area of research.

Oh yeah, regarding the really long instructions.
Yeah, I think for any of these tasks there's a lot of subtlety in how to do the task properly, and so we ended up adding more and more details of what to do in this situation and what to do in that situation. I think it's starting to get pretty unwieldy with these really long instruction manuals, so there are some promising ideas for how to address this. There's a paper from DeepMind recently, Sparrow, that basically broke down the task: they had people look at one aspect of the response at a time, and then they had a way of combining these. They would train a bunch of rule-specific reward models and then combine them at the end. I think there are some other interesting ideas for how to make this process better.

So I gather from your answer about WebGPT that the whole idea is that you want the language model to have access to external knowledge. But I wonder where you think the line should be in terms of what a language model should know, what the language model should look up, and maybe what the language model should not know, or not purport to know. Do you have opinions about that?

Let's see. Some people are advocating for very small language models that have no external knowledge aside from language, I guess, would be the extreme position, and then other people have talked about language models that just know everything, as opposed to having an external knowledge source. There are some interesting questions there. I think it is a little hard to separate factual knowledge from understanding. As humans, we get by without memorizing all sorts of facts, just knowing that we can look them up if needed. But for working in a specific domain, it is useful to have a lot of facts internalized, so that you can recall them very quickly and combine them in your head.
So I wouldn't take an extreme position on either side. I think retrieval is going to be really useful, at the very least for current events, but I also don't think we want to try to pack all human knowledge into the weights of a neural net. On the other hand, people have had a lot of luck just scaling up models, and as they soak up more factual knowledge, they also get better at reasoning and other things. I haven't seen any demonstrations of tiny models that just do lots of retrieval and save all their weights for reasoning, or any successful attempts at making one.

Let's move on to Training language models to follow instructions with human feedback. That was Ouyang et al., 2022, with yourself as a co-author. Can you tell us the main idea of this paper? This is the InstructGPT paper. What is InstructGPT, and what's going on here?

InstructGPT is a language model that's fine-tuned to follow instructions, and it's in fact the one you can play with if you go to the OpenAI website: you get a big text box, you can write some text, and then press the button to generate a completion.

The idea here was this: language models are pretty useful, and you can sometimes get them to do what you want by prompting them just right. This idea of few-shot prompting has become pretty popular, where you give a few examples, like a few question-answer examples, and then if you ask another question, it'll hopefully provide an answer in the same style. So you can get language models to do great things with prompting, but prompting is itself an art. It's tricky to get right, and it's also not necessarily getting the best possible performance out of the model.
If you just take a raw language model and you try to talk to it, like you ask it a question, it doesn't know that it should actually answer that question as well as possible. For all it knows, you want it to give a joke answer, or a riddle, or something. So the idea of InstructGPT was: let's make a kind of small change to our language models so that they're much easier to use. In particular, we're going to train them so that if you have a piece of text containing an instruction, the model will try to follow that instruction to the best of its abilities. And pretty much anything can be an instruction: the instruction can be to continue a chat, or to summarize this text, or to give me a list of names for my company that sells widgets. Instructions can be anything, and that makes this kind of model very powerful. So that's the idea of an instruction-following model: a model that can do anything you specify with an instruction.

By the way, I wasn't a core contributor to this work. I was more involved with the RL infrastructure and some of the RL training details, helping out with that stuff. But anyway, what we did in this project was run this whole methodology I just described, RL from human preferences, in the instruction-following setting. We did supervised fine-tuning, collected preference data, trained a reward model, and then did RL against that reward model.

One interesting detail: the initial data was collected using contractors, but at a certain point we had the API, and we have this playground on the website, the big text box where you can use the model. So we took prompts that users had put into the playground and used those for training, both to collect preference data and to do RL.
This is disclosed to users pretty prominently: when people are using the playground, you get notified that your prompts might be used for training. We're also careful to train in such a way that we don't memorize any information that was in the prompts; we have a pretty elaborate process for making sure there's no private information being leaked into the model.

But anyway, that's basically the experimental setup, and the result was that this methodology works quite well. You get a model that's vastly preferred to the base model on this distribution of realistic prompts that people are giving the model, which often contain instructions. The raw language models generally do a really bad job of following instructions, but this RL-trained instruction-following model is a lot better. If you just calculate how much better, it's something like as good as a model that's a hundred times bigger.

That's a lot. You wanted the model to be truthful, is that one of the criteria you wanted?

Oh yeah, truthfulness was one of the criteria.

That seems amazing to me, that truthfulness is something it could learn by example. Does that mean that truthfulness is somehow represented inside the network? Because there's no external way for the model to confirm whether something is true or false. So how might it know what is true, without any external reference?

I think to some extent there is some internal representation of truthfulness. One way to think about what language models do is that they're trained to imitate the whole internet, and the internet is written by lots of different people and has lots of different types of content, from fiction to nonfiction, to detailed technical literature, to jokes and forum posts, whatever.
So the raw pre-trained model is basically an ensemble of all these people who wrote stuff on the internet. When you feed it a prompt, what it's doing internally has to be something like figuring out who wrote this prompt, and then trying to continue in that style. If it thinks it's reading something on the WallStreetBets Reddit, it's going to continue in that style, but if it thinks it's in the New York Times, it's going to write in a very different way. So effectively, the model must be calculating somewhere: what style is this, or what's the narrower ensemble of styles that I'm trying to imitate now?

At the very least, when you do training, either supervised fine-tuning or RL from human feedback, you can narrow down the set of styles the model is producing, and try to imitate the best person in the training set, or the best style in the training set. Obviously "best" will differ a lot, so what we end up with will depend on our instructions. We might end up with something that's kind of safe, not too controversial, a bit corporate, depending on what our instructions are. So at the very least, we can narrow in on one style, instead of having the whole distribution of styles on the internet.

I think probably there's more to it than that. We're not just learning about style; the model probably is internally trying to determine if statements are true or not, like if the prompt contains incorrect information, because that would be useful for determining a likely completion. I'm just talking about the raw pre-trained model here. I think just the objective of predicting next tokens gives you a lot: it forces the model to determine if things are true or not.
I think for RL fine-tuning, there's a lot more potential for the model to actually try to output something truthful, as opposed to trying to imitate a certain style, though I guess it would be hard to determine if that's what the model is actually trying to do.

So it's almost like the prompt is guiding the model, like: what corner of the internet do we want to imitate here? And maybe with InstructGPT we want to focus more on the most truthful corners of the internet, something similar to that?

Yeah, I would hope so. At least, I think that's a pretty good, though maybe a little simplistic, picture of what's going on. At the very least, we should be able to imitate the most truthful corner of the internet.

Can you talk about generalization, and how this type of model performs out of distribution? I guess, if it sees questions that are a bit different from what it was trained on. What happens if we get a little bit away from the training data with the reward models?

Language models in general generalize surprisingly well. Overall, these pre-trained models that are trained on super diverse data sets from the internet tend to generalize quite well, or surprisingly well, at least to those of us who were around for the earlier days of machine learning, when everything was trained from scratch and very fragile. For example, if you provide an instruction in some other language, even a fairly rare language, it'll often do a decent job following the instruction, even if there's zero data in the whole instruction-following training process that's in that language. That's just carry-over from the pre-training. So I think language models generalize quite well.
You asked about reward models. I think one of the tricky pieces about RL from human feedback is that you have this reward model and you're actually training against it, meaning you're training your policy to have high reward, and it's going to exploit the errors in the reward model. It's going to eventually find adversarial examples to the reward model. This is worse than normal out-of-distribution behavior; it's like targeted out-of-distribution examples. So there are definitely some challenges around getting reward models to generalize well, or generalize as far as possible from the training set.

Can these types of agents tell us when they don't know something, or is that a hard problem?

I'd say sort of. If you ask a question that's in the core of the model's knowledge, it'll know the answer, and it'll know that it knows. By the way, I'm talking about models like the instruct model. If you ask it about something very simple, at the core of its knowledge, it'll know. There are certain things that it knows that it doesn't know, like current events, where it's been trained to know that it doesn't know certain things in real time. But if you ask it about something that's on the edge of its knowledge, it's going to have a hard time; it's necessarily going to be inaccurate.

There have been a couple of papers about this question. There's a paper from Anthropic recently called Language Models (Mostly) Know What They Know, and there's also a paper from FHI and OpenAI about getting language models to express their uncertainty in words. These language models, as well as a lot of other models in machine learning, are trained to maximize likelihood, to maximize the log-prob of the data. You're already training them to predict a distribution of outputs. So for a language model, given a prefix, it's predicting a distribution over the next token.
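Checking whether such predicted probabilities are calibrated amounts to bucketing predictions by confidence and comparing each bucket's mean confidence with its empirical accuracy. A minimal sketch with toy numbers (not real model outputs):

```python
def calibration_buckets(probs, correct, n_bins=10):
    # Bucket predictions by confidence, then report each non-empty
    # bucket's (mean predicted probability, empirical accuracy) pair.
    # A calibrated model has these two numbers roughly equal per bucket.
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(probs, correct):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, c))
    report = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            acc = sum(c for _, c in b) / len(b)
            report.append((mean_p, acc))
    return report

# Toy data: ten 80%-confidence predictions, eight of which are right.
report = calibration_buckets([0.8] * 10, [1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
```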
These predictions for the next token are generally pretty well calibrated: if it puts 80% probability on something, and you look at all the times it put 80% probability on something, it's right 80% of the time. That's just a result of the training objective. The training objective strongly incentivizes the model to be calibrated, meaning it has a reasonable estimate of its uncertainty. So at the single-token level, models definitely are calibrated. The question is whether this calibration extends to settings where they're generating multi-token outputs, or whether they can judge the correctness of some multi-token statement. Since models are calibrated at the single-token level, I think they definitely have the information to be calibrated in these other settings. That's why I think the problem of models knowing what they know isn't actually that hard, or at least getting a model to express its uncertainty pretty much as well as a human does doesn't feel like an insurmountable problem, but there are some practical difficulties to getting there.

People use the phrase AI alignment in different ways. Can you talk about how you see alignment in your work on RL from human feedback?

I think of alignment mostly as the problem of getting the model to try to do the right thing, so we can make a distinction between what the model is capable of doing and what it's trying to do. If you just take a raw language model and you ask it a question, like I said before, it doesn't know that you actually want it to give the correct answer. It might think someone who's not very knowledgeable is answering. By doing some extra training, we can get the model to actually try to do the right thing, and I would say that's the main goal of alignment.

There was an OpenAI blog post recently that talked about a sequence in alignment.
One was training AI systems using human feedback, two was training AI systems to assist human evaluation, and three was training AI systems to do alignment research. So is your current work mostly about this first item, and when and how do you see us getting to these other stages?

I'm doing some work now on number two, training AI systems to assist human evaluation. I think that becomes increasingly necessary as you start trying to get the systems to solve harder and harder problems. When you have models that are well below human level, or maybe at human level at a certain task, it's pretty straightforward to supervise them. But once they're doing things that are very hard, or that require a lot of diverse technical knowledge, it becomes pretty hard to provide a useful supervision signal. So we have to start doing things like: one model writes an answer to a question, and then another model provides a critique of that answer, pointing out some flaws, and then the human only has to judge the first answer after looking at the critique. Basically, the critique helps the human assess the answer. I think that kind of idea is starting to become pretty relevant, and a colleague and I are exploring it now. As for assisting alignment research, there's some other work at OpenAI that's starting to explore this, but that's further down the road.

I saw Stuart Russell was on your PhD committee, and I really enjoyed his book Human Compatible. I wonder if you share the idea mentioned in the book that the standard RL framing, with its fixed reward signal, is problematic, that powerful agents should try to do what we want while maintaining some uncertainty about what it is we want, and that agents that are too certain will be problematic. Do you have any thoughts on that idea?

I totally agree with that idea.
First, it's really hard to write down a simple reward function that actually captures what we want, or what any particular person wants. I can say I want a little more of this or a little more of that, but you wouldn't want to take that to the extreme. If we build agents that try to cater to our wishes, we should make sure they have a lot of uncertainty about what we want or what we value; that will also cause them to be a little more cautious and, say, not disturb anything that might be important to us. So yeah, I agree with that. Stuart Russell gave a very good problem definition of what we want AI to do: we want to jointly play this game where the AI is trying to figure out what we want and then trying to do that, while simultaneously maintaining some uncertainty about what we want. If you start to look at how to get that in practice, it actually looks quite a bit like the kind of RL from human feedback that we're working on at OpenAI and that others are working on elsewhere. I see what we're doing as a practical implementation of getting towards the behavior that Russell described.

Do you think of AGI as an abstract goal, or are we going to see a model come out one day and people are going to say, oh, that's the first AGI model? What does it have to do for people to say that?

I think people will say that many times and then realize that it doesn't quite do everything that you want. I think we're going to have a long series of models that are superhuman at most things, or at a certain class of things, but that also have some failure modes and weaknesses. I expect us to see multiple models that are proclaimed as AGI, and only after interacting with them for a while do you realize they're not quite there.

What would you say is the relationship between AGI and RL, and between AGI and these large language models?
How do those concepts fit together?

I would say that RL is a useful, or almost essential, component of training AGI. What RL lets you do is optimize any objective for the agent, any objective that is a function of the agent's behavior. With pre-training, like what we do for language models, you're choosing an objective that lets us do something with all the training data we have, which is all this internet text. So we choose this maximum likelihood objective, which is not the only option, but a sensible way to absorb all this knowledge. But if we really want to optimize the agent's behavior for a specific objective, RL is kind of the only framework that lets you do that.

Okay John, we have a few questions from the audience, and I'm just going to pick the two that have the highest score in terms of Twitter likes. The first is from Eric Jang, VP of AI at Halodi Robotics. He asks: RL distributions are non-stationary, making it hard to reason about PPO losses and how that relates to return or generalization. Are there any intermediate plots and visualizations you like to generate to debug or incrementally build up a large-scale RL system?

Yeah, there are definitely some stats that I look at, and I'll talk about this in the nuts and bolts reboot. I'd say things like looking at the explained variance of the value function, looking at how many samples are getting clipped in PPO, and what the KL divergence is between the policy before and after the update, things like that.

And then Ethan Caballero from Mila asks: what is your median estimate for the arrival date of AGI?
I think not too far away, but like I said, I expect there to be a lot of false starts. I would say I expect AI to be able to do a better job than humans at most jobs that humans do now in five years or so. That's not all jobs, but most jobs. For a while we're going to keep discovering things that AI isn't very good at, and places where we want to keep humans in control, so I think there will be some kind of gradual process over the next 10 or 15 years.

I've been curious about this: I see that some RL work is patented, but I could not find patents on TRPO or PPO. Are those patent-protected at all, and how do you think about intellectual property protection for that kind of work?

I haven't ever looked into patenting anything, and OpenAI hasn't either as far as I know. I think the trend over time has been for people to take patents on machine learning algorithms less seriously. There's this algorithm in computer vision called SIFT, a keypoint detector, which was patented. The guy who patented it probably made his university some money from the patent, but in the end all it did was cause people a lot of annoyance, because people had to come up with alternative algorithms that had a different acronym and weren't patented. The OpenCV open-source library had to be careful about including the algorithm because of the patent risks. So these patent rights aren't exercised that much. Big companies like Google will patent a lot of stuff for defensive reasons, so if they get into some big legal dispute with another company it can be used as one of the bargaining chips, but I don't think anyone's going to get sued over royalties for the use of some algorithm.

Okay, and then there's been a ton of work in RL, of course, since you first
published TRPO and PPO. But from your point of view, if you had to pick a few highlights, a few important milestones in RL algorithms since PPO came out? And by the way, it's amazing that in 2022 we're still using PPO, I think in quite similar to its original form, is that right?

Yeah, pretty much.

So what would you say are the biggest highlights for you in terms of RL algorithms since you did PPO?

Yeah, there's definitely been some interesting stuff. A little after PPO there were TD3 and SAC, and those seem like pretty solid value-based methods, so that was one interesting development. I thought MuZero and its elaborations, like EfficientZero, were also pretty impressive, that you can get that good sample efficiency. Both of the things I just mentioned were, I don't want to say mostly on toy tasks or benchmarks, because I'm sure people are doing some real things with these algorithms, but that stuff was interesting. I think the whole recent resurgence of interest in offline RL was also notable. I would say the stuff we're doing with RL from human feedback is a kind of offline RL, because we have a fixed reward-modeling dataset and we're training against that.

This is like offline RL, but you're doing it in a different way: you're using an on-policy algorithm with a reward model, as opposed to the maybe more typical way to do offline RL, which would be to use an off-policy algorithm. Would that work here, or would that not work here?

What we're doing here is kind of like model-based RL, because the reward model is a model of the unknown part of the system. The unknown part of the system here is the human rater; it's not the part that's appending to your list of tokens. So this is kind of like the work that takes a dynamics model of the environment and does some
kind of rollout, just running a policy gradient algorithm against it. The idea of running an online algorithm against a model is pretty well established. I would say the papers that previously did this were in a pretty different regime. We're in this regime of doing fairly small updates to the policy, because we have these awesome pre-trained models and we don't need to actually change them that much, so we use these online algorithms. Part of the reason we can get away with just an online algorithm is that we've been looking at a contextual bandit problem: we only have one time step, you get a query, you output a response, and then that response gets a reward. If we had a multi-step process, such as a conversation where you can't assign a reward until the very end, or some interaction with a real-world system that's hard to simulate, then it wouldn't be straightforward to use exactly the same methodology. You would probably have to train a Q-function or something like that if you want your method to be sample-efficient, so you'd have to do something slightly different. I think we'll have to start exploring this at some point soon, but so far I haven't seen any cases in the domain I'm looking at that require it, though I expect it to be relevant at some point.

So we had Aravind Srinivas talking about the Decision Transformer on the show recently, and that was a great episode. I see that you were also a co-author on the 2016 RL squared paper, so I want to ask what your thoughts are about meta-RL. Aravind had some interesting things to say about the idea that maybe a transformer could supersede the need for an RL algorithm altogether. What do you expect from meta-RL? Do
you expect we'll still be using human-authored RL algorithms in the future?

Yeah, that's a pretty bold statement, that we won't need any RL algorithms anymore. Since the RL squared paper, people have been talking less about meta-learning as far as I can tell, actually because sequence modeling has gotten so good, with transformer-based sequence models. It's become clear that meta-learning is just a special case of learning: it's just a certain kind of long-context learning involving long episodes, and maybe it shouldn't be treated that differently or addressed with special algorithms. I would say ideas like the Decision Transformer are pretty interesting, where you try to reduce RL to supervised learning. It's still not certain exactly how these compare in performance to RL; people have started to analyze that empirically and theoretically, and in practice sometimes it's better and sometimes it's worse. In my experience it's been worse on the problems where my colleagues and I have tested it, but it's definitely an interesting direction.

Dr. John Schulman, thank you so much for sharing your time and your insight with the TalkRL audience today. Thanks so much.

Thank you.
[ { "end": 6.24, "start": 0, "text": " The answer was affirmative. We can get an agent to basically use a set of tools that we give it." }, { "end": 12.48, "start": 6.24, "text": " In this case, the browsing commands like searchings. I would say I expect AI to be able to do better," }, { "end": 17.84, "start": 12.48, "text": " a better job than humans at most jobs that humans do now. Five years or so." }, { "end": 27.92, "start": 22.56, "text": " TalkAulRO podcast is all reinforcing learning all the time, featuring brilliant guests," }, { "end": 34.08, "start": 27.92, "text": " both research and applied. Join the conversation on Twitter at TalkRL podcast. I'm your host," }, { "end": 44.32, "start": 34.08, "text": " Robin Chohan. John Schulman is a co-founder of OpenAI and a researcher and engineer at OpenAI." }, { "end": 48.32000000000001, "start": 44.32, "text": " He is well known for major contributions to the field of reinforcement learning," }, { "end": 54.400000000000006, "start": 48.32000000000001, "text": " including the TRPO algorithm that's trust region policy optimization, GAE, generalized" }, { "end": 59.12, "start": 54.4, "text": " advanced estimation. Those are from his UC Berkeley dissertation and TRPO's" }, { "end": 65.03999999999999, "start": 59.12, "text": " descendant proximal policy optimization, or PPO. His current focus at OpenAI is on RL from" }, { "end": 68.16, "start": 65.03999999999999, "text": " human feedback. John, welcome to the show and thanks so much for being here." }, { "end": 71.75999999999999, "start": 68.16, "text": " Thanks a lot for having me. You were literally one of the first people I thought of when I started" }, { "end": 77.6, "start": 71.75999999999999, "text": " the show three years back. Thanks, I'm honored. It means a lot to me to have you here today. 
I definitely" }, { "end": 83.12, "start": 77.6, "text": " remember you were nuts and bolts of deep RL video back in the day and watching that multiple times" }, { "end": 88.88000000000001, "start": 83.12, "text": " and gaining a lot from that. You helped a generation of RL practitioners back then. By the way," }, { "end": 95.52000000000001, "start": 88.88000000000001, "text": " there's going to be a reboot of the nuts and bolts presentation. I got invited to give a talk" }, { "end": 101.92, "start": 95.52000000000001, "text": " at NERPS this year on it. I'll have to revamp the guidelines and everything. That'll be fun." }, { "end": 107.12, "start": 101.92, "text": " Oh, that's awesome. Can't wait for that. You were clearly one of the earlier pioneers in deep RL." }, { "end": 112.4, "start": 107.12, "text": " How did you choose to move your focus to RL from human feedback? Why is that an important problem?" }, { "end": 117.84, "start": 112.4, "text": " Why is that important to you? After GB3 was trained, I was blown away by how smart it was and I" }, { "end": 122.32000000000001, "start": 117.84, "text": " realized the next frontier was figuring out how to make language models actually useful. I'm still" }, { "end": 128.4, "start": 122.32000000000001, "text": " really interested in RL but solving RL benchmarks isn't the end of the story. To use your RL" }, { "end": 134.08, "start": 128.4, "text": " algorithm you need a reward function. Whereas the reward function come from in RL benchmarks," }, { "end": 138.16, "start": 134.08, "text": " you usually just code up the reward function. But if you're not in a simulator environment," }, { "end": 144.07999999999998, "start": 138.16, "text": " that doesn't work. What we have to do in any kind of real-world use case is have humans look at" }, { "end": 149.04, "start": 144.07999999999998, "text": " what the AI did and decide if it was good or bad. 
How exactly do you define this reward" }, { "end": 154, "start": 149.04, "text": " becomes a really challenging and important problem, especially as the tasks get harder to evaluate?" }, { "end": 159.2, "start": 154, "text": " Another angle on this is that language models are very smart but it's hard to get them to do" }, { "end": 164.24, "start": 159.2, "text": " anything useful. A big part of that is they're not necessarily trying to do what you want. They're" }, { "end": 168.88, "start": 164.24, "text": " just trying to imitate the training corpus. That means there's a big opportunity to improve" }, { "end": 173.84, "start": 168.88, "text": " them a lot by just giving them the right objective. That's what we can do by applying RL to these" }, { "end": 181.12, "start": 174.64000000000001, "text": " language models using human feedback to define the reward. Is human feedback harder or" }, { "end": 185.92000000000002, "start": 181.12, "text": " very different in some way than using a synthetic reward? There are a lot of new complications." }, { "end": 192.56, "start": 187.36, "text": " You have to collect a data set dynamically. You're always in the business of building data sets of" }, { "end": 199.12, "start": 192.56, "text": " human preferences. Often the data quality there matters more than various algorithmic details." }, { "end": 204.32, "start": 199.12, "text": " You also have to think a lot about exactly how you're giving the task to the human trainers" }, { "end": 208.32, "start": 204.32, "text": " and various other things that you wouldn't have thought about if you just had a programmatic reward" }, { "end": 213.44, "start": 208.32, "text": " function. Does the difference between human-raders or the noisiness of the reward signal cost any" }, { "end": 220.56, "start": 213.44, "text": " problems? I would say the noise definitely you need to be below some threshold of noise to learn" }, { "end": 226.64000000000001, "start": 220.56, "text": " anything. 
I think in general if you have a large noisy data set that can be as good as a smaller" }, { "end": 231.6, "start": 226.64000000000001, "text": " clean data set. Actually, noise isn't the thing that worries me the most. It's more that there are" }, { "end": 238, "start": 231.6, "text": " sometimes consistent biases that people have. For example, in settings like question answering" }, { "end": 244.4, "start": 238, "text": " or settings where you have a model writing some text, often people prefer longer answers. You end" }, { "end": 249.36, "start": 244.4, "text": " up with these very verbose answers. If you're not careful with the instructions that is. You can" }, { "end": 256.40000000000003, "start": 249.36, "text": " also instruct people the raiders to reward brevity. But without yet, if you're not careful you can" }, { "end": 262, "start": 257.04, "text": " incentivize the wrong kinds of behaviors. So let's move to some of your recent work. First up is" }, { "end": 268.40000000000003, "start": 262, "text": " WebGPT. Browser assisted question answering with human feedback. That's a Nekano at all with yourself" }, { "end": 273.84000000000003, "start": 268.40000000000003, "text": " as a co-author in 2021. Can you tell us what is the main idea of this paper? What is WebGPT?" }, { "end": 280.23999999999995, "start": 273.84, "text": " In WebGPT, we basically took our language models and we hooked them up to a web browser so they" }, { "end": 285.35999999999996, "start": 280.23999999999995, "text": " could retrieve information from the web. They can write an answer by summarizing the relevant pages" }, { "end": 290.08, "start": 285.35999999999996, "text": " from the web. 
That way if you're asking a question about current events or a question that requires" }, { "end": 295.35999999999996, "start": 290.08, "text": " some detailed scientific or technical knowledge, this AI can go out and look up the answer and" }, { "end": 301.67999999999995, "start": 295.35999999999996, "text": " with detailed citations to its sources. I would say there's two interesting points to this. One is" }, { "end": 306.24, "start": 301.68, "text": " we were exploring whether you could turn language models into a kind of agent. There's a lot of data" }, { "end": 310.32, "start": 306.24, "text": " on the web of different texts that people have written. But there's not a lot of data that shows" }, { "end": 316.24, "start": 310.32, "text": " how to actually do some multi-step process. So it's not that clear, uprearry whether you can get a" }, { "end": 321.68, "start": 316.24, "text": " language model to actually carry out some iterative process. We just have a lot of data like writing" }, { "end": 326.16, "start": 321.68, "text": " essays and having chats and so forth. So that was one thing we were exploring here and I think" }, { "end": 332.8, "start": 326.16, "text": " the answer was affirmative. We can get an agent to basically use a set of tools that we give it." }, { "end": 338.16, "start": 332.8, "text": " In this case the browsing commands like searchings, scroll link, click on links. The second" }, { "end": 344.24, "start": 338.16, "text": " theme of this paper was around truthfulness. I mean a big issue with language models is I mean" }, { "end": 349.76000000000005, "start": 344.24, "text": " they're not very reliable at giving you true information. They know a vastly superhuman amount. But" }, { "end": 354.64000000000004, "start": 349.76000000000005, "text": " if you prompt them in the wrong way they'll just output lots of plausible sounding nonsense. 
So" }, { "end": 359.84, "start": 354.64, "text": " how to fix that is a big research question or one of the biggest research questions in the" }, { "end": 364.32, "start": 359.84, "text": " world of language models. I think it's going to be challenging to fully fix it but I think a big" }, { "end": 370.32, "start": 364.32, "text": " part of the story involves retrieval and having models write answers that contain citations." }, { "end": 375.28, "start": 370.32, "text": " Citations to try trusted sources. So a person who's checking over the answer doesn't have to go and" }, { "end": 379.91999999999996, "start": 375.28, "text": " try to figure out where the model might have gotten this idea. They can go and directly look at" }, { "end": 387.6, "start": 379.92, "text": " the source and see if it supports the AI statement. With WebGBT we just wanted to see if we do give" }, { "end": 392.40000000000003, "start": 387.6, "text": " the language model a really flexible interface to the web. Can we have it answer hard questions" }, { "end": 398.32, "start": 392.40000000000003, "text": " truthfully using like with the help of all these citations. And it's actually really non-trivial" }, { "end": 403.76, "start": 398.32, "text": " because if you look at the data that we use the Reddit explain it like on five. The questions" }, { "end": 408.08000000000004, "start": 403.76, "text": " are really varied like some of them are about science, history, current events. Like our" }, { "end": 413.84, "start": 408.08, "text": " Raiders didn't necessarily know anything about these topics but still they had to judge the answers" }, { "end": 418.88, "start": 413.84, "text": " written detailed answers. So it would have been really hard to do it without the supporting" }, { "end": 425.12, "start": 418.88, "text": " citations. So we kind of validated that we could get good feedback in a hard domain like this" }, { "end": 431.12, "start": 425.12, "text": " with the help of citations. 
Can you talk about where the idea for WebGBT came from? Is that an idea" }, { "end": 435.12, "start": 431.12, "text": " you've had kicking around for a while or was it something that came up recently before the" }, { "end": 441.36, "start": 435.12, "text": " paper? How did that play out? Some of the ideas had been floating around like we thought that we" }, { "end": 447.12, "start": 441.36, "text": " actually had a project at OpenAI very early on a world called World of Bits. We were looking at" }, { "end": 452.16, "start": 447.12, "text": " controlling web browsers or doing tasks that involve tasks on the internet with the web browser" }, { "end": 458.4, "start": 452.16, "text": " but it was way too early at the time. So we kind of abandoned it for a few years. Actually we" }, { "end": 462.8, "start": 458.4, "text": " were trying to back then we were trying to do it with full visual input. So we thought yeah we could" }, { "end": 469.12, "start": 462.8, "text": " give some instructions to the agent like go and figure out figure out the address of this" }, { "end": 475.68, "start": 469.84000000000003, "text": " building or something. The agent would go and search the web or use Google Maps or whatever" }, { "end": 479.92, "start": 475.68, "text": " to figure out the answer. And we were trying to do this all in pixels that obviously didn't work" }, { "end": 486.16, "start": 479.92, "text": " very well. But now we have these great language models on the work on text data. We can also" }, { "end": 493.12, "start": 486.16, "text": " extract the text out of web pages to get most of the information. We can't really interact with" }, { "end": 498.16, "start": 493.12, "text": " a lot of dynamic websites. Yeah, where there's a lot of JavaScript and images and so forth. But" }, { "end": 504.64000000000004, "start": 498.16, "text": " as long as it's just browsing and reading text we're fine. 
So yeah we had good enough models and" }, { "end": 510.8, "start": 504.64000000000004, "text": " that made it kind of feasible to revisit this idea of using the internet as an environment." }, { "end": 516.32, "start": 510.8, "text": " So I would say that was one of the sources of inspiration that long-stinted, that long kind of" }, { "end": 522.4, "start": 516.32, "text": " thread about like using the internet as an environment. Another motivation was just after we got" }, { "end": 529.12, "start": 523.2, "text": " after we started playing with GPD3 we noticed that it had all these problems with factual" }, { "end": 535.52, "start": 529.12, "text": " accuracy and the reliability of the information it was giving us. So that kind of motivated doing" }, { "end": 540.4, "start": 535.52, "text": " more research on how to make language models more truthful. We were kind of brainstorming what to" }, { "end": 547.04, "start": 540.4, "text": " do there and we went through some docs and eventually decided that we wanted to try some question" }, { "end": 551.92, "start": 547.04, "text": " answering like using the web, looking up knowledge on the web to help answer questions. So actually" }, { "end": 556.24, "start": 551.92, "text": " the original version of the project used trivia questions. So there's another, there's this" }, { "end": 562.16, "start": 556.24, "text": " well-known data set trivia QA that has some basic trivia questions. So we first worked a little" }, { "end": 569.12, "start": 562.16, "text": " bit on that data set and tried to see if we could boost the model's accuracy by giving it web search" }, { "end": 576, "start": 569.12, "text": " and yeah that actually works quite straight, that worked pretty easily. So then we decided to move on" }, { "end": 582.72, "start": 576, "text": " to long-form question answering and so that gave us the, that was the project we ended up working on" }, { "end": 589.12, "start": 582.72, "text": " for a while. 
It seems like you use a few different data sets here and a number of different training" }, { "end": 594.96, "start": 589.12, "text": " methods. I'll just mention the last behavior cloning, reward modeling, reinforcement learning," }, { "end": 601.76, "start": 594.96, "text": " and rejection sampling. So we were using a fairly standard methodology which was actually adapted" }, { "end": 609.2, "start": 601.76, "text": " from previous work on RL from Human Preferences. So the pipeline is you first train a model with" }, { "end": 615.44, "start": 609.2, "text": " supervised learning where you you have human demonstrators show how to do the task, like show how to map" }, { "end": 620.8000000000001, "start": 615.44, "text": " from observations to actions. Yeah so that's the supervised learning or behavior cloning step then we" }, { "end": 628.7199999999999, "start": 620.8, "text": " train a reward model or preference model. It looks at two actions or two out trajectories and decides" }, { "end": 633.76, "start": 628.7199999999999, "text": " which one is better. In this case like in a question answering setting you're looking at two answers" }, { "end": 638.56, "start": 633.76, "text": " and deciding which answer is better and we use that to train a reward model that assigns higher score" }, { "end": 643.04, "start": 638.56, "text": " to the good answers than the bad ones. Then you do reinforcement learning against that reward function" }, { "end": 648.16, "start": 643.04, "text": " and of course you can iterate these last two steps. After you do a little RL now you're, you sort of" }, { "end": 653.4399999999999, "start": 648.16, "text": " exploited some of the flaws of the reward model like or some of the noise in the reward model and" }, { "end": 658.9599999999999, "start": 653.4399999999999, "text": " it's not necessarily accurate on your new distribution of data. 
You recollect more pairs of samples" }, { "end": 665.28, "start": 658.9599999999999, "text": " and refit this preference model and then you do another iteration of RL. So that's like that's" }, { "end": 670.9599999999999, "start": 665.28, "text": " the whole RL from Human Feedback Pipeline and there's this other idea called rejection sampling" }, { "end": 676.48, "start": 670.9599999999999, "text": " or best event sampling and in general you can do other kinds of search too where instead of doing" }, { "end": 681.52, "start": 676.48, "text": " RL once you have your reward model you can just search against that reward model so you can take" }, { "end": 687.6, "start": 681.52, "text": " a bunch of collect a bunch of samples and re-rank them with the reward model and take the best one" }, { "end": 694.08, "start": 687.6, "text": " as your action. Kind of like NPC. Yeah exactly. Yeah kind of depends exactly what setting you're in" }, { "end": 699.76, "start": 694.64, "text": " what you can do. If you're in a setting where there's some environment you're interacting with then" }, { "end": 705.44, "start": 699.76, "text": " you would have to simulate your, you would have to simulate the dynamics of your environment which" }, { "end": 711.84, "start": 705.44, "text": " yeah so that would look kind of like NPC. In our case we were the only thing we had to learn a model of" }, { "end": 718.24, "start": 711.84, "text": " was the human preference so like we're it's a question answering setting so it's really like a" }, { "end": 723.2, "start": 718.24, "text": " contextual banded problem so it's kind of straightforward to take a bunch of sample a bunch of" }, { "end": 730.8000000000001, "start": 723.2, "text": " actions where each action is a full answer and re-rank them or search against the search over answers." 
}, { "end": 736.4, "start": 730.8, "text": " So in terms of the action space was it the action space just a list of commands or is it still" }, { "end": 743.76, "start": 736.4, "text": " generating tokens like a regular generative mode? We were generating tokens. We had two phases of" }, { "end": 751.12, "start": 743.76, "text": " like in each episode of the RL task so there is first a browsing phase where where the model goes" }, { "end": 757.04, "start": 751.12, "text": " and it issues searches and clicks on things and quotes relevant information like if it sees" }, { "end": 761.92, "start": 757.04, "text": " something useful on the page it'll it'll quote it using this quote commands and then once it's" }, { "end": 769.28, "start": 762.8, "text": " browse it's done browsing it'll issue another command called end browsing and it'll write its" }, { "end": 775.68, "start": 769.28, "text": " answer that's also expressed in tokens but really we rolled this all into one big RL task where" }, { "end": 781.28, "start": 775.68, "text": " your episode involves browsing and writing out the answer and it's all one big RL episode." }, { "end": 785.28, "start": 781.28, "text": " Did you think this is going to work well or were you kind of surprised? At the very beginning of the" }, { "end": 790.72, "start": 785.28, "text": " project we didn't know if it was going to work or not. Like after we did the initial experiments" }, { "end": 797.68, "start": 790.72, "text": " with Trivia QA which actually didn't take that long to get running then it became pretty clear" }, { "end": 802.24, "start": 797.68, "text": " that it would work that the browsing part worked at least and we already know that we can get" }, { "end": 807.8399999999999, "start": 802.24, "text": " these models to write pretty good long form text with a bunch of if you give them a bunch of" }, { "end": 814.16, "start": 807.8399999999999, "text": " snippets of text that they they can cite. 
So I noticed the the the human raiders task was" }, { "end": 818.88, "start": 814.16, "text": " quite complicated as it was a long guide and there was many types of feedback that they were giving" }, { "end": 823.52, "start": 818.88, "text": " but in the end the paper said that only the final rating was used so I was just curious if you" }, { "end": 827.28, "start": 823.52, "text": " hadn't commented about that like why do you think maybe the model couldn't use that extra feedback" }, { "end": 833.12, "start": 827.28, "text": " whereas it was maybe just too much or not enough samples. Yeah that's been one frustrating" }, { "end": 840.0799999999999, "start": 833.8399999999999, "text": " finding so far in in that project and also some other projects we've had the same finding but" }, { "end": 845.76, "start": 840.08, "text": " you have your raiders go through this long process for each for each comparison they do where" }, { "end": 851.12, "start": 845.76, "text": " they're comparing a pair of answers and then you only use one bit of information from the whole" }, { "end": 855.84, "start": 851.12, "text": " from this whole process which might have taken like half an hour. It seems like it would be better if" }, { "end": 862.08, "start": 855.84, "text": " we if we were able to extract more information more about the process they went through in arriving" }, { "end": 867.0400000000001, "start": 862.08, "text": " at the answer. 
So we did collect all sorts of other information, like we had them provide ratings" }, { "end": 873.4399999999999, "start": 867.04, "text": " along several different axes like coherence and factual accuracy and so forth, but in the end" }, { "end": 880.24, "start": 874.3199999999999, "text": " we didn't really get much of a boost out of using any of this other information. So I'd say" }, { "end": 886.56, "start": 881.12, "text": " it seems like it should be possible to do better, but unfortunately this methodology, which" }, { "end": 893.68, "start": 886.56, "text": " seems kind of dumb, so far it's hard to beat. People have tried various other ideas for how" }, { "end": 898, "start": 893.68, "text": " to use human feedback instead of getting these preference scores; there are various other things you" }, { "end": 903.68, "start": 898, "text": " can do, like you can have them write critiques or maybe edit the responses. Yeah, I think" }, { "end": 911.12, "start": 903.68, "text": " some of these things are also promising, but this methodology of collecting preference data" }, { "end": 917.04, "start": 911.12, "text": " works well. Yeah, I think it's still an open area of research. Oh yeah, regarding the really" }, { "end": 922.64, "start": 917.04, "text": " long instructions. Yeah, I think for any of these tasks there is a lot of subtlety in how to do the" }, { "end": 929.4399999999999, "start": 922.64, "text": " task properly, and so we ended up adding more and more details of what do you do in this situation" }, { "end": 933.76, "start": 929.4399999999999, "text": " and what do you do in that situation. 
I think it's starting to get pretty unwieldy with these really" }, { "end": 940.8, "start": 933.76, "text": " long instruction manuals, so there are some promising ideas for how to address this. Like, there's a" }, { "end": 946.4, "start": 940.8, "text": " paper from DeepMind recently, Sparrow, that basically broke down the task:" }, { "end": 952.3199999999999, "start": 947.04, "text": " they basically had people look at one aspect of the response at a time," }, { "end": 957.0400000000001, "start": 952.32, "text": " and then they had a way of combining these: they would train a bunch" }, { "end": 961.6800000000001, "start": 957.0400000000001, "text": " of rule-specific reward models and then combine them at the end. Yeah, I think there are some other" }, { "end": 967.5200000000001, "start": 961.6800000000001, "text": " interesting ideas for how to make this process better. So I gather from your answer" }, { "end": 972.6400000000001, "start": 967.5200000000001, "text": " about WebGPT that the whole idea of WebGPT is that you want the language model to have access to" }, { "end": 978.48, "start": 972.6400000000001, "text": " external knowledge, but I wonder where you think the line should really be in terms of what a" }, { "end": 982.88, "start": 978.48, "text": " language model should know and what the language model should look up, and maybe what the language" }, { "end": 987.6800000000001, "start": 982.88, "text": " model should not know or not purport to know. Do you have opinions about that? 
Yeah, let's see:" }, { "end": 994.16, "start": 988.4, "text": " some people are advocating for very small language models that have no external knowledge" }, { "end": 998.5600000000001, "start": 994.16, "text": " aside from language, which I guess would be the extreme position, and then other people" }, { "end": 1002.5600000000001, "start": 998.5600000000001, "text": " talk about language models that just know everything as opposed to having an external knowledge" }, { "end": 1008.24, "start": 1002.5600000000001, "text": " source. There are some interesting questions there. I think it is a little hard to separate" }, { "end": 1015.6, "start": 1008.24, "text": " factual knowledge from understanding. As humans we get by without memorizing all sorts of" }, { "end": 1021.6800000000001, "start": 1016.4, "text": " facts and just knowing that we can look them up if needed. For working on a specific domain it is" }, { "end": 1028.88, "start": 1021.6800000000001, "text": " useful to have a lot of facts internalized so that you can recall them very quickly and" }, { "end": 1034.24, "start": 1028.88, "text": " kind of combine them in your head. So I wouldn't take an extreme position on either" }, { "end": 1041.44, "start": 1034.24, "text": " side. I would say I think retrieval is going to be really useful, at the very least for" }, { "end": 1048.88, "start": 1041.44, "text": " current events, but also I don't think we want to try to pack all human knowledge into the weights" }, { "end": 1054.72, "start": 1048.88, "text": " of a neural net. 
On the other hand, I think people have had a lot of luck just scaling up models, and" }, { "end": 1061.44, "start": 1055.68, "text": " as they soak up more factual knowledge they also get better at reasoning and other things," }, { "end": 1068, "start": 1061.44, "text": " and I think I haven't seen any demonstrations of tiny models that just do lots of retrieval" }, { "end": 1073.68, "start": 1068, "text": " and save all their weights for reasoning. Yeah, I just haven't seen any evidence of this," }, { "end": 1078.8, "start": 1073.68, "text": " or any successful attempts at making this. Let's move on to Training" }, { "end": 1084.16, "start": 1078.8, "text": " Language Models to Follow Instructions with Human Feedback; that was Ouyang et al. and that was 2022," }, { "end": 1088.72, "start": 1084.16, "text": " with yourself as a co-author. Can you tell us the main idea with this paper? This is the Instruct" }, { "end": 1094.64, "start": 1088.72, "text": " GPT paper. What is InstructGPT and what's going on here? InstructGPT is a language model that's" }, { "end": 1099.44, "start": 1094.64, "text": " fine-tuned to follow instructions, and it's in fact the one that you can play with if you go to" }, { "end": 1105.84, "start": 1100.08, "text": " the OpenAI website: you get a big text box and you can write some text and then press the button" }, { "end": 1112.24, "start": 1105.84, "text": " to generate a completion. So the idea here was, I mean, language models are pretty useful and you can" }, { "end": 1117.84, "start": 1112.96, "text": " sometimes get them to do what you want by prompting them just right. This idea of few shot" }, { "end": 1123.52, "start": 1117.84, "text": " prompting has become pretty popular, where you give a few examples, like a few question answer" }, { "end": 1128.3999999999999, "start": 1123.52, "text": " examples, and then if you ask another question it'll hopefully provide an answer in the same style."
}, { "end": 1133.84, "start": 1128.3999999999999, "text": " So the idea was, you can get language models to do great things with prompting, but prompting" }, { "end": 1139.04, "start": 1133.84, "text": " is itself an art and it's tricky to get right, and it's also kind of not necessarily getting the" }, { "end": 1143.6799999999998, "start": 1139.04, "text": " best possible performance out of the model. If you just take a raw language model and you try to" }, { "end": 1148.4, "start": 1143.68, "text": " talk to it, like you ask it a question, it probably doesn't know that it should actually" }, { "end": 1154.0800000000002, "start": 1148.4, "text": " answer that question as well as possible. For all it knows you want it to give a joke answer or" }, { "end": 1160, "start": 1154.0800000000002, "text": " a riddle or something. Yeah, so the idea of InstructGPT was, let's make a kind of small change" }, { "end": 1164.4, "start": 1160, "text": " to our language models so that they're much easier to use. In particular, we're going to train them" }, { "end": 1171.3600000000001, "start": 1164.4, "text": " so that if you have a piece of text where there's an instruction, the model will try to follow that" }, { "end": 1176.6399999999999, "start": 1171.36, "text": " instruction to the best of its abilities, and pretty much anything can be an instruction:" }, { "end": 1183.04, "start": 1176.6399999999999, "text": " the instruction can be to continue a chat, or it can be to" }, { "end": 1190.8, "start": 1183.04, "text": " summarize this text, or give me a list of names for my company that sells widgets. Yeah, instructions" }, { "end": 1195.84, "start": 1190.8, "text": " can be anything, and that makes this kind of model very powerful. 
So that was kind of" }, { "end": 1199.76, "start": 1195.84, "text": " the idea of an instruction following model: it's a model that can do anything that" }, { "end": 1204.16, "start": 1199.76, "text": " you specify with an instruction. And by the way, I wasn't a core contributor to this work; I was" }, { "end": 1212.16, "start": 1204.8, "text": " more involved with getting the RL infrastructure and some of the RL training details," }, { "end": 1218.24, "start": 1212.16, "text": " like helping out with that stuff. But anyway, what we did in this project was we ran this" }, { "end": 1224, "start": 1218.24, "text": " whole methodology that I just described of RL from human preferences in this instruction" }, { "end": 1230.32, "start": 1224, "text": " following setting. So we did supervised fine tuning, collected preference data, trained a reward" }, { "end": 1236.72, "start": 1230.32, "text": " model, and then did RL against that reward model. One interesting detail: whereas the" }, { "end": 1244.88, "start": 1236.72, "text": " initial data was just collected using contractors, at a certain point we had the API," }, { "end": 1252, "start": 1244.88, "text": " and we have this playground on the website, the big" }, { "end": 1258.24, "start": 1252, "text": " text box where you can use the model. So we took prompts that users had put into" }, { "end": 1264.56, "start": 1258.24, "text": " the playground and used those for training, both to collect preference data and to do RL."
}, { "end": 1271.92, "start": 1264.56, "text": " And this is disclosed to users pretty prominently: when people are using" }, { "end": 1276.88, "start": 1271.92, "text": " the playground you get notified that your prompts might be used for training, and we're also" }, { "end": 1282.88, "start": 1276.88, "text": " careful to train in such a way that we don't memorize any information that was in the prompts." }, { "end": 1288.5600000000002, "start": 1282.88, "text": " We have a pretty elaborate process for making sure there's no" }, { "end": 1295.0400000000002, "start": 1289.2800000000002, "text": " private information being leaked into the model. But anyway, that's basically the" }, { "end": 1302.16, "start": 1295.7600000000002, "text": " experimental setup, and the result was that this methodology works quite well and you" }, { "end": 1308.64, "start": 1302.16, "text": " get a model that's vastly preferred to the base model on this distribution of realistic prompts" }, { "end": 1314.4, "start": 1308.64, "text": " that people are giving the model, which often contain instructions. The raw" }, { "end": 1321.68, "start": 1314.4, "text": " language models generally do a really bad job following instructions, but this RL trained instruction" }, { "end": 1328.0800000000002, "start": 1321.68, "text": " following model is a lot better, and if you just calculate how much better," }, { "end": 1333.6, "start": 1328.08, "text": " it's something like it's as good as a model that's a hundred times bigger. That's a lot. Yeah." }, { "end": 1337.36, "start": 1333.6, "text": " You wanted the model to be truthful, is that one of the criteria you wanted?" }, { "end": 1342.1599999999999, "start": 1337.36, "text": " Oh yeah, truthfulness was one of the criteria. 
That seems amazing to me, that truthfulness is" }, { "end": 1346.32, "start": 1342.1599999999999, "text": " something that it could learn by example. Does that mean that truthfulness is somehow" }, { "end": 1351.04, "start": 1346.32, "text": " represented inside the network? Because there's no external way for the model to confirm" }, { "end": 1355.76, "start": 1351.04, "text": " whether something is true or false. So how might it know what is true without any" }, { "end": 1362.24, "start": 1355.76, "text": " external reference? I think to some extent there is some internal representation of truthfulness." }, { "end": 1367.12, "start": 1362.24, "text": " I would say one way to think about what language models do is they're trained to imitate" }, { "end": 1371.52, "start": 1367.12, "text": " the whole internet, and the internet is written by lots of different people and has lots of different" }, { "end": 1379.04, "start": 1371.52, "text": " types of content, from fiction to nonfiction to detailed technical literature" }, { "end": 1386.3999999999999, "start": 1379.04, "text": " to jokes and forum posts, whatever. So the raw pre-trained model is basically an ensemble of all" }, { "end": 1392.8799999999999, "start": 1386.3999999999999, "text": " these people who wrote stuff on the internet. When you feed it a prompt," }, { "end": 1398.08, "start": 1392.8799999999999, "text": " what it's doing internally has to be something like figuring out who wrote this prompt" }, { "end": 1403.04, "start": 1398.08, "text": " and then trying to continue in that style. So if it thinks it's reading something on the" }, { "end": 1409.36, "start": 1403.04, "text": " WallStreetBets subreddit it's going to continue in that style, but if it thinks it's in the New" }, { "end": 1417.6, "start": 1409.36, "text": " York Times it's going to write in a very different way. 
So effectively the model must be calculating" }, { "end": 1423.92, "start": 1417.6, "text": " somewhere, what style is this, or what's the narrower ensemble of styles that" }, { "end": 1429.76, "start": 1423.92, "text": " I'm trying to imitate now. At the very least, when you do training, either" }, { "end": 1435.12, "start": 1429.76, "text": " supervised fine tuning or RL from human feedback, you can at least narrow down the set of" }, { "end": 1442.56, "start": 1435.12, "text": " styles the model is producing and try to imitate the best person in the training set" }, { "end": 1448, "start": 1442.56, "text": " or the best style in the training set, and obviously best will differ a lot. So what we'll end up with" }, { "end": 1453.68, "start": 1448, "text": " will depend on our instructions. I don't know, we might end up with something that's" }, { "end": 1462.16, "start": 1453.68, "text": " kind of safe, not too controversial, but a bit corporate; we'll end up with something" }, { "end": 1468.8, "start": 1462.16, "text": " like that depending on what our instructions are. So at the very least we can kind of narrow" }, { "end": 1474, "start": 1468.8, "text": " in on one style instead of having the whole distribution of styles on the internet. I think probably" }, { "end": 1479.52, "start": 1474, "text": " there's more to it than that; we're not just learning about style, but the model probably is" }, { "end": 1485.04, "start": 1479.52, "text": " internally trying to determine if statements are true or not, like if the prompt" }, { "end": 1490.6399999999999, "start": 1485.04, "text": " contains incorrect information, because that probably would be useful for determining a likely" }, { "end": 1495.6, "start": 1490.6399999999999, "text": " completion. 
I'm just talking about the raw pre-trained model, so I think just the" }, { "end": 1501.52, "start": 1495.6, "text": " objective of predicting next tokens probably gives you a lot; it forces the model to" }, { "end": 1506.8799999999999, "start": 1501.52, "text": " determine if things are true or not. I think for RL fine tuning there's a lot more potential" }, { "end": 1513.1200000000001, "start": 1506.88, "text": " for the model to actually try to output something truthful as opposed to trying to imitate" }, { "end": 1519.1200000000001, "start": 1513.1200000000001, "text": " a certain style, though I guess it would be hard to determine if that's what the" }, { "end": 1524.4, "start": 1519.1200000000001, "text": " model is actually trying to do. So it's almost like the prompt is guiding the model, like" }, { "end": 1529.5200000000002, "start": 1524.4, "text": " what corner of the internet do we want to imitate here, and maybe" }, { "end": 1534.0800000000002, "start": 1529.5200000000002, "text": " InstructGPT wants to focus more on the most truthful corners of the internet," }, { "end": 1539.1999999999998, "start": 1534.08, "text": " something similar to that. Yeah, I would hope so at least. I think that's a pretty good, though maybe" }, { "end": 1543.1999999999998, "start": 1539.1999999999998, "text": " a little simplistic, picture of what's going on. At the very least we should be able to imitate" }, { "end": 1549.36, "start": 1543.1999999999998, "text": " the most truthful corner of the internet. So can you talk about generalization and how does" }, { "end": 1554.56, "start": 1549.36, "text": " this type of model perform out of distribution? Like I guess if it sees questions that are a bit" }, { "end": 1558.3999999999999, "start": 1554.56, "text": " different than what it was trained on. 
What happens if we get a little bit away from the training" }, { "end": 1563.84, "start": 1558.3999999999999, "text": " data with the reward models? I mean, language models in general generalize surprisingly well, and" }, { "end": 1568.8, "start": 1563.84, "text": " I would say overall these pre-trained models that are trained on super diverse data sets" }, { "end": 1573.84, "start": 1568.8, "text": " from the internet tend to generalize quite well, or surprisingly well at least. It's surprising" }, { "end": 1580.72, "start": 1573.84, "text": " to those of us who were around for the earlier days of machine learning when everything was" }, { "end": 1586.08, "start": 1580.72, "text": " trained from scratch and very fragile. For example, if you provide an instruction in some" }, { "end": 1591.28, "start": 1586.08, "text": " other language, even a fairly rare language, it'll often do a decent job following the" }, { "end": 1597.36, "start": 1591.28, "text": " instruction, even if there's zero data in the whole instruction following training process" }, { "end": 1603.84, "start": 1597.92, "text": " that's in that language, and that's just carry-over from the pre-training. So I think" }, { "end": 1608.16, "start": 1603.84, "text": " yeah, I think language models generalize quite well. So you asked about reward models. I think one" }, { "end": 1614.08, "start": 1608.16, "text": " of the tricky pieces about RL from human feedback is that you have this reward model and you're" }, { "end": 1618.6399999999999, "start": 1614.08, "text": " actually training against it, meaning you're training your policy to have high reward, and it's
This is worse than kind of normal out of distribution behavior it's" }, { "end": 1634.0800000000002, "start": 1628.72, "text": " like targeted out of distribution examples so so there are definitely some challenges around" }, { "end": 1640.8000000000002, "start": 1634.8000000000002, "text": " getting reward models to generalize well or generalize as far as possible from the training set." }, { "end": 1645.6000000000001, "start": 1640.8000000000002, "text": " Can these types of agents tell us when they don't know something or is that a hard problem?" }, { "end": 1651.9199999999998, "start": 1645.6, "text": " I'd say sort of if you ask a question that's kind of in the core of the model's knowledge it will" }, { "end": 1656, "start": 1651.9199999999998, "text": " know know the answer and it'll know that it knows. By the way I'm talking about models like the" }, { "end": 1661.6, "start": 1656, "text": " for the instruct model if you ask it about something that's like very simple at the core of its" }, { "end": 1666.08, "start": 1661.6, "text": " knowledge it'll know if you there are certain things that it knows that it doesn't know like" }, { "end": 1672.8799999999999, "start": 1666.7199999999998, "text": " current events where it's been trained to know that it doesn't know certain things in real time but" }, { "end": 1678.16, "start": 1672.88, "text": " if you ask it about something that's kind of on the edge of its knowledge it's it's going to have a" }, { "end": 1682.96, "start": 1678.16, "text": " hard time it's it's necessarily going to be inaccurate. 
I mean there have been a couple papers" }, { "end": 1689.2800000000002, "start": 1683.7600000000002, "text": " about this question so there is in paper from Anthropic recently called language models" }, { "end": 1695.2800000000002, "start": 1689.2800000000002, "text": " mostly know what they know and there is also a paper from FHI and OpenAI called" }, { "end": 1700.48, "start": 1696.5600000000002, "text": " getting language models to express their uncertainty and words. These language" }, { "end": 1706.32, "start": 1700.48, "text": " models as well as a lot of other models in machine learning are training to maximize likelihood" }, { "end": 1711.6, "start": 1706.32, "text": " so maximize log-prob of data. You're already training them to always predict a distribution of" }, { "end": 1718.8, "start": 1711.6, "text": " outputs. So for language models given a prefix it's predicting a distribution over the next token." }, { "end": 1725.76, "start": 1718.8, "text": " These predictions for the next token like generally are pretty well calibrated but 80% if it puts 80%" }, { "end": 1731.6, "start": 1725.76, "text": " probability on something and you look at all the times when it puts 80% probability on something" }, { "end": 1736.72, "start": 1731.6, "text": " like it's right 80% of the time. Like that's just a result of the training objective. The training" }, { "end": 1742.96, "start": 1736.72, "text": " objective like strongly incentivizes the model to be calibrated meaning it has a reasonable" }, { "end": 1748.8, "start": 1742.96, "text": " estimate of its uncertainty. So at the single token level models definitely are calibrated." 
}, { "end": 1754.8, "start": 1748.8, "text": " The question is whether this calibration extends to settings where" }, { "end": 1760.6399999999999, "start": 1754.8, "text": " they are generating multi-token outputs, or whether they can judge the correctness of some" }, { "end": 1766, "start": 1760.6399999999999, "text": " multi-token statement. So I would say, since models are calibrated at the single token level," }, { "end": 1772.6399999999999, "start": 1766.56, "text": " I think they definitely have the information to be calibrated in these other settings." }, { "end": 1778.48, "start": 1772.6399999999999, "text": " So that's why I think the problem of models knowing what they know isn't actually that hard;" }, { "end": 1783.9199999999998, "start": 1778.48, "text": " at least getting a model to express its uncertainty pretty much as well as a human does" }, { "end": 1788.88, "start": 1783.92, "text": " doesn't feel like an insurmountable problem, but there are some practical difficulties to" }, { "end": 1793.92, "start": 1788.88, "text": " getting there. People use the phrase AI alignment in different ways. Can you talk about how you see" }, { "end": 1800.0800000000002, "start": 1793.92, "text": " alignment in your work on RL from human feedback? I think of alignment mostly as the problem of" }, { "end": 1805.28, "start": 1800.0800000000002, "text": " getting the model to try to do the right thing, so we can kind of make a distinction between" }, { "end": 1811.28, "start": 1805.92, "text": " that and what the model is capable of doing. Like if you just take a raw language model and you ask
By doing some" }, { "end": 1826, "start": 1821.68, "text": " extra training we can get the model to actually try to do the right thing and so I would say that" }, { "end": 1832.16, "start": 1826, "text": " that's the main goal of alignment. So there was an open AI blog post recently that talked about" }, { "end": 1840.24, "start": 1832.16, "text": " the sequence in alignment. One was training AI systems using human feedback to use it training AI" }, { "end": 1846, "start": 1840.24, "text": " systems to assist human evaluation and three training AI systems to do alignment research." }, { "end": 1852, "start": 1846, "text": " So is your current work mostly about this first item and when and how do you see us getting to" }, { "end": 1858.4, "start": 1852, "text": " these other stages? I'm doing some work now on number two training AI systems to assist human feedback." }, { "end": 1865.04, "start": 1858.4, "text": " I think that's sort of becomes increasingly necessary as you start trying to get the systems" }, { "end": 1869.44, "start": 1865.04, "text": " to solve harder and harder problems. When you have models that are kind of very below human level" }, { "end": 1875.6000000000001, "start": 1869.44, "text": " or maybe at human level at a certain task it's pretty straightforward to supervise them. But" }, { "end": 1880.4, "start": 1875.6000000000001, "text": " once they're doing things that are very hard or doing things that require a lot of diverse" }, { "end": 1886.96, "start": 1880.4, "text": " technical knowledge it becomes pretty hard to provide a useful supervision signal. 
So we have to" }, { "end": 1893.3600000000001, "start": 1886.96, "text": " start doing things like one model writes an answer to do a question and then another model provides" }, { "end": 1900.9599999999998, "start": 1893.36, "text": " a critique of that answer points out some flaws and then the human only has to judge the first answer" }, { "end": 1906.7199999999998, "start": 1900.9599999999998, "text": " after looking at the critique meaning basically the critique helps the human assess the answer. So I" }, { "end": 1912.1599999999999, "start": 1906.7199999999998, "text": " think like that kind of idea is starting to become pretty relevant. A colleague's an I are exploring" }, { "end": 1917.28, "start": 1912.1599999999999, "text": " that kind of idea now. As for assisting alignment research there's some other work at open AI that's" }, { "end": 1923.1999999999998, "start": 1917.28, "text": " starting to explore this. It's also that sort of the for this down the road. So I saw Stuart Russell" }, { "end": 1928.96, "start": 1923.2, "text": " was on your PhD committee and I really enjoyed his book Human Compatible. I wonder if you share" }, { "end": 1933.04, "start": 1928.96, "text": " the idea mentioned in the book that the standard RL framing with this fixed reward signal" }, { "end": 1939.8400000000001, "start": 1933.8400000000001, "text": " is problematic and that agents powerful agents should try to do what we want and maintain some" }, { "end": 1945.68, "start": 1939.8400000000001, "text": " uncertainty about what it is we want and the agents that are too certain will be problematic." }, { "end": 1952.56, "start": 1945.68, "text": " What do you have any thoughts on that idea? I totally agree with that idea. So I think first it's" }, { "end": 1959.36, "start": 1952.56, "text": " really hard to write down a simple reward function that actually captures what we want or what any" }, { "end": 1964.96, "start": 1959.36, "text": " any particular person wants. 
I can say I want a little more of this or a little more of that, but" }, { "end": 1971.2, "start": 1965.6, "text": " you wouldn't want to take that to the extreme. If we build agents that try to cater to our" }, { "end": 1978.6399999999999, "start": 1971.2, "text": " wishes, we should make sure they have uncertainty about what we" }, { "end": 1984.64, "start": 1978.64, "text": " want or what we value, and that'll also cause them to be a little more cautious and, say," }, { "end": 1991.92, "start": 1984.64, "text": " not disturb anything that might be important to us. So yeah, I agree with that. Stuart Russell" }, { "end": 1998.3200000000002, "start": 1991.92, "text": " gave a very good problem definition of what we want AI to do: basically" }, { "end": 2003.6000000000001, "start": 1998.3200000000002, "text": " we want to jointly play this game where the AI is trying to figure out what we want" }, { "end": 2008.4, "start": 2003.6000000000001, "text": " and then trying to do that, but simultaneously maintaining some uncertainty about what we want." }, { "end": 2013.0400000000002, "start": 2008.4, "text": " I would say if you start to look at how to get that in practice, it actually looks quite a bit" }, { "end": 2019.76, "start": 2013.0400000000002, "text": " like the kind of RL from human feedback that we're working on at OpenAI and others are working on at" }, { "end": 2027.8400000000001, "start": 2019.76, "text": " other places. I think I see what we're doing as a practical implementation of getting" }, { "end": 2033.2, "start": 2027.8400000000001, "text": " towards this behavior that Russell has described. 
Do you think of AGI as an abstract goal, or" }, { "end": 2037.44, "start": 2033.2, "text": " are we going to see a model come out one day and people are going to say, oh, that's the first AGI" }, { "end": 2044, "start": 2037.44, "text": " model? Like what does it have to do for people to say that? I think people will say that many times," }, { "end": 2049.52, "start": 2044.72, "text": " then realize that it doesn't quite do everything that you want. I think we're going to have" }, { "end": 2055.44, "start": 2049.52, "text": " a long series of models that are superhuman at most things, or at a certain class of" }, { "end": 2064.08, "start": 2055.44, "text": " things, but they also have some failure modes and weaknesses. I expect us to see multiple" }, { "end": 2070.96, "start": 2064.08, "text": " models that are proclaimed as AGI, and then only after interacting with them a while do you realize" }, { "end": 2078, "start": 2070.96, "text": " they're not quite there. What would you say is the relationship between AGI and RL, and AGI and" }, { "end": 2084.96, "start": 2078, "text": " these large language models? How do those concepts fit together? I would say that RL is a useful" }, { "end": 2090.96, "start": 2084.96, "text": " component of training AGI, or an almost essential component. The thing RL lets you do is it" }, { "end": 2098.48, "start": 2090.96, "text": " lets you optimize any objective for the agent, any objective that is a function of the agent's" }, { "end": 2105.04, "start": 2098.48, "text": " behavior. So with pre-training, like what we do for language models, you're kind of choosing an" }, { "end": 2111.04, "start": 2105.04, "text": " objective that lets us do something with all the training data we have, which is all this internet" }, { "end": 2116.4, "start": 2111.04, "text": " text. 
So we choose this maximum likelihood objective which is basically the only or not the" }, { "end": 2122.4, "start": 2116.4, "text": " only thing but it's like a sensible way to absorb all this knowledge. But then if we really want to" }, { "end": 2128.32, "start": 2122.4, "text": " optimize the agents behavior for a specific objective RL is kind of the only framework that lets you" }, { "end": 2133.12, "start": 2128.32, "text": " do that. Okay John we have a few questions from the audience and I'm just going to pick the two" }, { "end": 2139.36, "start": 2133.12, "text": " that have the highest score in terms of Twitter likes. So the first is from Eric Jang VP of AI" }, { "end": 2144.48, "start": 2139.36, "text": " at Halodi Robotics. He asked RL distributions are non-stationary making it hard to reason about" }, { "end": 2149.92, "start": 2144.48, "text": " PPO losses and how that relates to return or generalization. Are there any intermediate plots" }, { "end": 2156, "start": 2149.92, "text": " and visualizations you like to generate to debug or incrementally build up a large scale RL system?" }, { "end": 2163.2, "start": 2156, "text": " Yeah there are definitely some stats that I look at so I will be I'll talk about this in the nuts" }, { "end": 2172.2400000000002, "start": 2163.2, "text": " and bolts like reboot waited a year but I'd say things like you're looking at the explained
And then Ethan Caballero from Mila asks what is your median" }, { "end": 2198, "start": 2191.3599999999997, "text": " estimate for the arrival date of AGI? I think not too far away but I like I said I expect there to" }, { "end": 2205.12, "start": 2198, "text": " be a lot of false starts I would say expect like like AI to be able to do better a better job than" }, { "end": 2211.36, "start": 2205.12, "text": " humans at most jobs that humans do now five years or so that's not all jobs but most jobs for a while" }, { "end": 2215.84, "start": 2211.36, "text": " we're going to discover things that AI isn't very good at and then where we want to keep humans in" }, { "end": 2221.12, "start": 2215.84, "text": " control so I think there'll be some kind of gradual process over the next 10 or 15 years." }, { "end": 2227.2, "start": 2221.12, "text": " I've been curious about this I see that some RL work is patented but I could not find a TRPO or" }, { "end": 2234.3999999999996, "start": 2227.2, "text": " PPO in I could not find patents on these are those protected patent protected at all or how do you" }, { "end": 2240.56, "start": 2234.3999999999996, "text": " how do you think of intellectual property protection for that kind of work?
I haven't ever looked" }, { "end": 2246.16, "start": 2240.56, "text": " looked into patenting anything and open AI hasn't either as far as I know I think the trend over time" }, { "end": 2251.12, "start": 2246.16, "text": " has been for people to take a patent scene machine like a machine learning algorithms last" }, { "end": 2256, "start": 2251.12, "text": " seriously there is this algorithm in computer vision called sift which is like this key point" }, { "end": 2262.88, "start": 2256, "text": " to detector and this was patented I think the the guy who patented it he probably made his" }, { "end": 2268.88, "start": 2262.88, "text": " university some money from the patent but in the end all it did was cause people a lot of annoyance" }, { "end": 2275.28, "start": 2268.88, "text": " because like the people people had to come up with alternative algorithms that like had a" }, { "end": 2282.72, "start": 2275.28, "text": " different acronym and weren't patented so like the open CV open source library would have like" }, { "end": 2288.08, "start": 2282.72, "text": " had to be careful about putting this algorithm in their library because of the patent risks so" }, { "end": 2294.7999999999997, "start": 2288.64, "text": " I think like these patents aren't the patent rights aren't exercise that much and I think big" }, { "end": 2301.2, "start": 2294.7999999999997, "text": " companies like Google will patent a lot of stuff for defensive reasons so if they get in some big" }, { "end": 2307.52, "start": 2301.2, "text": " legal dispute with another company it can be used as like one of the bargaining chips but I think" }, { "end": 2315.12, "start": 2307.52, "text": " I don't think anyone's going to like get sued for royalties for not yeah for not providing royalties" }, { "end": 2320.24, "start": 2315.12, "text": " for the use of some algorithm okay and then there's been a ton of work in RL of course since you" }, { "end": 2326.4, "start": 2320.24, "text": " first published 
TRPO and PPO but from your point of view if you had to pick a few highlights in" }, { "end": 2333.52, "start": 2326.4, "text": " terms of a few important milestones in in RL algorithms since PPO came out and by the way it's" }, { "end": 2341.2, "start": 2333.52, "text": " amazing that in 2022 we're still using PPO I think quite similar to its original form is that" }, { "end": 2347.44, "start": 2341.2, "text": " right yeah pretty much yeah so so what would you say are the the biggest highlights for you" }, { "end": 2352.96, "start": 2348.4, "text": " in terms of RL algorithms since since you did PPO yeah there's definitely been some interesting" }, { "end": 2361.6, "start": 2352.96, "text": " stuff so I think like a little after PPO there is TD3 and SAC and those are seem like pretty solid" }, { "end": 2366.96, "start": 2361.6, "text": " value-based methods that was one development that was interesting I think like yeah I thought" }, { "end": 2375.36, "start": 2366.96, "text": " MuZero and its and its like elaborations we're also like EfficientZero were also pretty" }, { "end": 2380.72, "start": 2375.36, "text": " impressive that you can get that good sample efficiency both of the things I just mentioned were" }, { "end": 2386.7999999999997, "start": 2380.72, "text": " kind of well I don't want to say mostly on toy tasks or benchmarks because yeah I'm sure people" }, { "end": 2391.92, "start": 2386.8, "text": " are doing some real things with these algorithms yeah so I think that's that stuff was interesting" }, { "end": 2400.2400000000002, "start": 2391.92, "text": " I think like the whole recent interest in search of interest in the offline RL was also notable" }, { "end": 2405.28, "start": 2400.2400000000002, "text": " I would say the like the stuff we're doing with RL from human feedback is the kind of offline RL" }, { "end": 2411.76, "start": 2405.92, "text": " because we're like we have a fixed dataset and we have a fixed reward modeling dataset and
we're" }, { "end": 2416.2400000000002, "start": 2411.76, "text": " training against that this is like offline RL but you're doing it in a different way you're using" }, { "end": 2423.2, "start": 2416.24, "text": " an on-policy algorithm with a reward model as opposed to maybe a more typical way to do offline RL" }, { "end": 2427.9199999999996, "start": 2423.2, "text": " would be use off-policy algorithm would that work here or would that not work here well we're" }, { "end": 2434.3999999999996, "start": 2427.9199999999996, "text": " doing here is kind of like model-based RL because the reward model is like a model of the like the" }, { "end": 2440.56, "start": 2434.3999999999996, "text": " unknown part of the system so like the unknown part of the system here is the is the human" }, { "end": 2448.08, "start": 2440.56, "text": " radar or the human it's not the outputting appending to your list of tokens so this is kind of like" }, { "end": 2454.48, "start": 2448.08, "text": " the work that's like takes a dynamics model at the environment and does some kind of just runs a" }, { "end": 2459.44, "start": 2454.48, "text": " policy grading algorithm against it so it's not like so the idea of running an online algorithm" }, { "end": 2465.52, "start": 2460.08, "text": " against a model that's kind of a well-established idea so I would say the papers that previously" }, { "end": 2470.72, "start": 2465.52, "text": " did this they were in a pretty different regime were in this regime of doing fairly small" }, { "end": 2476.24, "start": 2470.72, "text": " updates to the policy because we have this these awesome pre-trained models and we don't need to" }, { "end": 2482.56, "start": 2476.24, "text": " actually change them that much so yeah we use these online algorithms I'd say part of the reason" }, { "end": 2490.4, "start": 2482.56, "text": " why we can get away with using just an like an online algorithm is because we've been just looking" }, { "end": 2495.52, "start": 
2490.4, "text": " at a bandit a contextual bandit problem yeah because we only have like one time step like you get" }, { "end": 2501.52, "start": 2495.52, "text": " a query and you output a response and then that response gets a reward so if we had a like a" }, { "end": 2509.04, "start": 2501.52, "text": " multi-step process such as a conversation where you can't assign a reward until the very end of" }, { "end": 2516, "start": 2509.04, "text": " the conversation and or you had some I don't know some interaction with like some real-world" }, { "end": 2520.64, "start": 2516, "text": " system that's hard to simulate you wouldn't then it wouldn't be straightforward to you wouldn't" }, { "end": 2526.08, "start": 2520.64, "text": " be able to use exactly exactly the same methodology you would probably have to use a you would have" }, { "end": 2532.24, "start": 2526.08, "text": " to probably train a Q function or or something like that if you want if you want your method to be" }, { "end": 2536.4, "start": 2532.24, "text": " sample efficient you would probably have to do something slightly different I think we'll we'll" }, { "end": 2542.88, "start": 2536.4, "text": " have to we'll have to start exploring this at some point soon but so far we haven't at least" }, { "end": 2550.48, "start": 2542.88, "text": " I haven't seen any cases in like in the domain I'm looking at that require this but I expect it to" }, { "end": 2556.96, "start": 2551.44, "text": " to be relevant at some point so we had Aravind Srinivas talking about decision transformer" }, { "end": 2561.76, "start": 2556.96, "text": " on the show recently that was a great episode and I see that you were also a co-author on the" }, { "end": 2565.92, "start": 2561.76, "text": " the 2016 RL squared paper I want to ask you what your thoughts about meta RL" }, { "end": 2571.28, "start": 2566.6400000000003, "text": " Arvind had some interesting things to say about maybe the idea that a transformer could kind of" }, {
"end": 2575.92, "start": 2571.28, "text": " supersede the need for an RL algorithm altogether what do you expect from meta RL" }, { "end": 2581.36, "start": 2575.92, "text": " do expect will will still be using human authored RL algorithms in the future yeah that's a pretty" }, { "end": 2586.6400000000003, "start": 2581.36, "text": " bold statement that we don't need we won't need any RL algorithms anymore yeah since the RL squared" }, { "end": 2593.0400000000004, "start": 2586.6400000000003, "text": " paper people have been talking less about meta learning as far as I can tell actually because" }, { "end": 2599.28, "start": 2593.0400000000004, "text": " of sequence modeling has gotten so good like transformer let sequence models so that it's kind" }, { "end": 2604.2400000000002, "start": 2599.28, "text": " of queer the meta learning is just a special case of learning like it's it's just it's just like" }, { "end": 2610.0800000000004, "start": 2604.2400000000002, "text": " a certain kind of long context learning learning involving long episodes and maybe it shouldn't be" }, { "end": 2615.36, "start": 2610.0800000000004, "text": " treated that differently or are addressed with special algorithms I would say yeah the ideas like" }, { "end": 2620.6400000000003, "start": 2615.36, "text": " decision transformer are pretty interesting where you try to reduce RL to supervise learning it's" }, { "end": 2626.0800000000004, "start": 2620.6400000000003, "text": " still not like certain exactly how these compare and performance to RL like people have started to" }, { "end": 2633.04, "start": 2626.08, "text": " analyze that empirically and theoretically and I would say in practice sometimes sometimes it's" }, { "end": 2638.48, "start": 2633.04, "text": " better sometimes it's worse in my experience like it's been worse on the problems that I've" }, { "end": 2644.56, "start": 2638.48, "text": " that I've my colleagues and I have where we've tested it but yeah it's definitely an 
interesting" }, { "end": 2649.12, "start": 2644.56, "text": " direction Dr. John Schillman thank you so much for sharing your time in your insight with the" }, { "end": 2660.08, "start": 2649.12, "text": " talk our audience today thanks so much thank you" } ]
Sven Mika
"Sven Mika of Anyscale on RLlib present and future, Ray and Ray Summit 2022, applied RL in Games / F(...TRUNCATED)
https://media.transistor…14b.mp3?src=site
" There's a rise in interest in our finance. We have JPM for example, as well as other companies tha(...TRUNCATED)
[{"end":5.72,"start":0.0,"text":" There's a rise in interest in our finance. We have JPM for example(...TRUNCATED)
Karol Hausman and Fei Xia
"Karol Hausman and Fei Xia of Google Research on newly updated (PaLM-)SayCan, Inner Monologue, robot(...TRUNCATED)
https://media.transistor…17f.mp3?src=site
" This type of emergent capability is super interesting for us to see and super exciting for us by u(...TRUNCATED)
[{"end":7.12,"start":0.0,"text":" This type of emergent capability is super interesting for us to se(...TRUNCATED)
Sai Krishna Gottipati
"Sai Krishna Gottipati of AI Redefined on RL for synthesizable drug discovery, Multi-Teacher Self-Pl(...TRUNCATED)
https://media.transistor…80a.mp3?src=site
" TalkRL podcast is all reinforced in learning all the time, featuring brilliant guests both researc(...TRUNCATED)
[{"end":10.32,"start":0.0,"text":" TalkRL podcast is all reinforced in learning all the time, featur(...TRUNCATED)
Aravind Srinivas 2
"Aravind Srinivas, Research Scientist at OpenAI, returns to talk Decision Transformer, VideoGPT, cho(...TRUNCATED)
https://media.transistor…583.mp3?src=site
" TalkRL podcast is all reinforced in learning all the time, featuring brilliant guests both researc(...TRUNCATED)
[{"end":11.46,"start":0.0,"text":" TalkRL podcast is all reinforced in learning all the time, featur(...TRUNCATED)
Rohin Shah
"DeepMind Research Scientist Dr. Rohin Shah on Value Alignment, Learning from Human feedback, Assist(...TRUNCATED)
https://media.transistor…ae3.mp3?src=site
" TalkRL podcast is all reinforcing learning all the time, featuring brilliant guests, both research(...TRUNCATED)
[{"end":11.0,"start":0.0,"text":" TalkRL podcast is all reinforcing learning all the time, featuring(...TRUNCATED)
Jordan Terry
"Jordan Terry on maintaining Gym and PettingZoo, hardware accelerated environments and the future of(...TRUNCATED)
https://media.transistor…615.mp3?src=site
" TalkRL podcast is all reinforced in learning all the time featuring brilliant guests both research(...TRUNCATED)
[{"end":10.72,"start":0.0,"text":" TalkRL podcast is all reinforced in learning all the time featuri(...TRUNCATED)
Robert Lange
"Robert Lange on learning vs hard-coding, meta-RL, Lottery Tickets and Minimal Task Representations,(...TRUNCATED)
https://media.transistor…dc1.mp3?src=site
" Robert Tiacolange is a PhD student working at the Technical University of Berlin. Thanks so much f(...TRUNCATED)
[{"end":8.0,"start":0.0,"text":" Robert Tiacolange is a PhD student working at the Technical Univers(...TRUNCATED)
NeurIPS 2021 Political Economy of Reinforcement Learning Systems (PERLS) Workshop
Dr. Thomas Gilbert and Dr. Mark Nitzberg on the upcoming PERLS Workshop @ NeurIPS 2021
https://media.transistor…61c.mp3?src=site
" Hi listeners, today we're going to hear about the upcoming Pearls Workshop. That is the political (...TRUNCATED)
[{"end":5.5600000000000005,"start":0.0,"text":" Hi listeners, today we're going to hear about the up(...TRUNCATED)
Amy Zhang
"Amy Zhang shares her work on Invariant Causal Prediction for Block MDPs, Multi-Task Reinforcement L(...TRUNCATED)
https://media.transistor…c6c.mp3?src=site
" This is TalkArail Podcast. All reinforcement learning, all the time. Interviews of brilliant folks(...TRUNCATED)
[{"end":11.0,"start":0.0,"text":" This is TalkArail Podcast."},{"end":13.84,"start":11.0,"text":" Al(...TRUNCATED)

Dataset Card for "talkrl-podcast"

This dataset is sourced from the TalkRL Podcast website and contains English transcripts of wonderful TalkRL podcast episodes. The transcripts were generated using OpenAI's base Whisper model.
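Each row stores the Whisper output twice: as flat text in the `transcript` field and as a JSON-encoded list of timed segments in the `segments` field, where each segment carries `start`, `end`, and `text` keys (times in seconds, as in the rows shown above). A minimal sketch of recovering the flat transcript from a `segments` value; the sample string below is invented for illustration and is not a row from this dataset:

```python
import json

# Invented example of a `segments` value, shaped like the rows above:
# a JSON list of Whisper segments with start/end times in seconds.
segments_json = (
    '[{"end": 5.72, "start": 0.0, "text": " There is a rise in interest."},'
    ' {"end": 9.5, "start": 5.72, "text": " We have several examples."}]'
)

segments = json.loads(segments_json)

# Whisper segments appear to carry their own leading space, so plain
# concatenation followed by a strip rebuilds the flat transcript.
transcript = "".join(seg["text"] for seg in segments).strip()
print(transcript)
```

In the rows above each segment's `text` begins with a space, which is why simple concatenation (rather than joining with `" "`) appears to reproduce the `transcript` field.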
