Dataset schema (one record per transcript segment): video_id (string, 11 chars), title (string, 0-100 chars), text (string, 513-648 chars), start_timestamp (string, 8 chars), end_timestamp (string, 8 chars), start_second (string, 1-5 chars), end_second (string, 2-5 chars), url (string, 48-52 chars), thumbnail (string, 0-52 chars)
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
into two versions of that same image, and when I then embed them with a neural network (the same network on the left-hand and the right-hand channels), then the embeddings should be close, as measured with some cosine similarity, and of course if there's another image that I embed, its embedding should be far away, and those are the negatives in the denominator. For more details on that, of course, go back to our self-supervised learning lectures from a few weeks ago. What's important here is that this is a very simple idea; it's just saying: turn an
01:30:59
01:31:32
5459
5492
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5459s
https://i.ytimg.com/vi/Y…axresdefault.jpg
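To make the contrastive setup just described concrete, here is a minimal sketch of an InfoNCE-style loss over a batch, assuming normalized embeddings scored by cosine similarity; the temperature value and batch layout are illustrative choices, not details from the lecture.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Two augmented views of the same image should embed close together
    (the diagonal); every other image in the batch serves as a negative
    in the denominator of the softmax."""
    z1 = F.normalize(z1, dim=1)          # (N, D) embeddings of view 1
    z2 = F.normalize(z2, dim=1)          # (N, D) embeddings of view 2
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities
    labels = torch.arange(z1.shape[0])   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```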
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
image into two images and the embeddings should be close; take a different image and its embedding should be far from these. What's surprising is that, even though it's relatively simple, it enables representation learning such that, on top of it, all you need is a linear classifier to get really good image classification performance. They actually looked at many types of augmentations: cropping, cutout, color, Sobel filter, noise, blur, rotate, and what they found is that crop matters the most, and color matters quite a bit too,
01:31:32
01:32:09
5492
5529
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5492s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
but really cropping is the one that matters the most. So now CURL: CURL combines this kind of representation learning with RL. What do we do here? We have a replay buffer on which we normally would just run reinforcement learning. We take our observations from the replay buffer; since this is a dynamical system, we need to look at a sequence of frames and consider that a single observation, otherwise we cannot observe velocity (in a single frame you cannot tell the velocity). So we'll have a stack of sequential frames
01:32:09
01:32:43
5529
5563
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5529s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
that together we consider a single observation. That stack of frames then undergoes data augmentation, two data augmentations, in this case two different crops; one goes into the query encoder, the other into the key encoder (these could actually be the same or different, you can choose), and then ultimately you do two things with this. In the top path it just goes to the reinforcement learning loss, so if you run, say, D4PG, or you run soft actor-critic, or you run PPO and so forth, that happens along the top path. So what it means is along the top
01:32:43
01:33:21
5563
5601
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5563s
https://i.ytimg.com/vi/Y…axresdefault.jpg
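As a rough sketch of the pipeline just described, random cropping can turn one stacked-frame observation into a query view and a key view; the crop size, frame-stack shape, and overall framing here are illustrative assumptions rather than the exact CURL implementation.

```python
import torch

def random_crop(frames, size=84):
    """Crop a (C, H, W) stack of frames at a random position; the same
    crop is applied across the stacked frames so velocity information
    in the stack is preserved."""
    _, h, w = frames.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return frames[:, top:top + size, left:left + size]

obs = torch.rand(9, 100, 100)   # e.g. 3 stacked RGB frames as one observation
query_view = random_crop(obs)   # goes to the query encoder and the RL loss
key_view = random_crop(obs)     # goes to the key encoder for the contrastive loss
```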
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
path you run your standard RL algorithm; the only thing that's changed is that we take this from the replay buffer and do some data augmentation. Now in the bottom path you have another data-augmented image of the same frame, and you have a contrastive loss, essentially the same loss (not exactly the same in the details, but at a high level the same) as we saw on the SimCLR slide. Okay, so a couple of things were important to make this work. SimCLR uses a cosine loss; what we found is that having a weighting matrix here between the key and the query is actually important, as we'll
01:33:21
01:34:02
5601
5642
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5601s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
see in the red curve: the bilinear weighting significantly outperforms using just cosine. The other thing we noticed is that using momentum in one of the encoder paths is very important too, which we actually saw in the self-supervised learning lecture: in the MoCo work they also have momentum in one of the paths. The same thing was important here, again a big difference. So once we do that, we can see that CURL outperforms both prior model-based and model-free state-of-the-art methods. What we look at here is median scores on DeepMind
01:34:02
01:34:37
5642
5677
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5642s
https://i.ytimg.com/vi/Y…axresdefault.jpg
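The two ingredients just mentioned might look like the following sketch: a learned bilinear similarity between query and key in place of plain cosine, and a MoCo-style momentum update for the key encoder. The matrix shape, momentum coefficient, and encoder interface are assumptions for illustration.

```python
import torch

D = 50
W = torch.randn(D, D, requires_grad=True)   # learned bilinear weighting matrix

def bilinear_logits(q, k):
    """Score q^T W k for every query/key pair in the batch, replacing
    the plain cosine similarity used in SimCLR."""
    return q @ W @ k.t()

@torch.no_grad()
def momentum_update(key_encoder, query_encoder, m=0.999):
    """MoCo-style update: the key encoder tracks an exponential moving
    average of the query encoder instead of receiving gradients."""
    for pk, pq in zip(key_encoder.parameters(), query_encoder.parameters()):
        pk.mul_(m).add_((1.0 - m) * pq)
```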
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Control 100k and DeepMind Control 500k; it's 100k environment steps or 500k environment steps, and so it's checking: really, can you learn to be data-efficient? It's not about where you are after one hundred million steps; the question is about where you are after 100 thousand or 500 thousand steps. So we see here, after 100k steps from state, if you had access to state, this is how far you get; CURL at 100k steps is a little bit behind what you can do from state, but at 500k steps it's actually all the way there. So we see that we can learn almost
01:34:37
01:35:09
5677
5709
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5677s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
as well from pixels as from state with CURL. If we compare with prior methods that also tried to learn from pixels, we see that they consistently were not doing nearly as well after 500k steps, and the same after 100k steps. So both after 100k and 500k steps, CURL outperforms prior RL-from-pixels methods on the DeepMind Control Suite, and it's getting very close to state-based learning. Here we have the learning curves: in gray we see state-based learning and in red we see CURL, and we see that in many of these red is matching gray;
01:35:09
01:35:50
5709
5750
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5709s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
there are a few exceptions, but in most of them red matches gray, meaning that with CURL, RL from pixels can be almost as efficient as RL from state, at least for these DeepMind Control tasks. And here we look at a table of results; you see in boldface the winner compared with all prior methods of learning from pixels, and you see that consistently CURL outperforms the other methods, for both the 500k and the 100k benchmark, not just on average but on essentially all of the individual tasks, except for
01:35:50
01:36:28
5750
5788
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5750s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
this one here and one here, where apparently CURL doesn't learn as fast. When we look at the details of what happens there, these are environments where the dynamics is fairly complex, so this requires some more research, but our hypothesis here has been that in those environments learning from pixels is particularly difficult, because if you just look at the pixels, the dynamics is not well captured in the sequence of frames you get to see; for example, if contact forces matter a lot, you cannot easily read those off from pixels,
01:36:28
01:37:05
5788
5825
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5788s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
and so having access to state makes a pretty big difference in terms of being able to learn. Looking at the Atari benchmark, we are looking at median human-normalized score across 26 Atari games at 100k frames, and we see that, compared to the prior state of the art (Rainbow DQN, SimPLe, and data-efficient Rainbow DQN), CURL consistently outperforms the prior state of the art, and it's getting to about 25 percent of human-normalized score. Here it is broken out for the individual games, with CURL
01:37:05
01:37:42
5825
5862
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5825s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
outperforming the prior state of the art fairly consistently, with SimPLe still coming in first on two of them. So can RL match human data efficiency? It's a good question. Looking at human-normalized score, we see on Freeway and on JamesBond that we get pretty much the level of human efficiency; for the other games there is a little bit of a way to go, but it's not a night-and-day difference: it's already double-digit percentage performance relative to human on almost all of them. Okay, so we looked at two main directions in representation
01:37:42
01:38:22
5862
5902
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5862s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
learning in reinforcement learning so far: using auxiliary losses, and doing things that essentially come down to trying to recover the underlying state with a self-supervised type of loss. Now there are other ways representation learning can help, mainly in exploration, which is one of the big challenges in reinforcement learning, and also in unsupervised skill discovery. So let's look at those two now. First, one way we can help exploration is through exploration bonuses. So what's the idea here? In a tabular scenario, meaning a very small
01:38:22
01:38:56
5902
5936
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5902s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
reinforcement learning problem where the number of states you can visit is countable, say there's only, you know, a 4x4 grid world, so the agent can be in only one of sixteen squares, that's it, one of sixteen possible states; a very simple thing to do is to give a bonus to the agent for visiting grid squares it hasn't been to before, or hasn't been to frequently before. That encourages going and checking things out that you don't have much experience with yet. That can be very effective in small environments, but it's impractical in a large continuous state
01:38:56
01:39:26
5936
5966
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5936s
https://i.ytimg.com/vi/Y…axresdefault.jpg
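In the tabular case, the count-based bonus just described takes only a few lines; the 1/sqrt(count) decay and the coefficient beta are one common choice, used here purely for illustration.

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)   # state -> number of visits so far

def exploration_bonus(state, beta=0.1):
    """Bonus that shrinks as a grid square is visited more often,
    encouraging the agent to check out unfamiliar states."""
    visit_counts[state] += 1
    return beta / math.sqrt(visit_counts[state])
```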
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
space, because with infinitely many states, well, there's always more stuff you haven't seen, so you need a different way of measuring what makes something new versus something already, maybe, understood. One big breakthrough in this space was to look at using a generative model, in this case a PixelCNN, for density estimation. The idea here is: you play an Atari game, or the agent is playing Atari, and you want to measure how often the agent has been in this state, but you can never count visits to a
01:39:26
01:40:02
5966
6002
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=5966s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
specific state, there's too many of them. So instead, what we're gonna do is train a PixelCNN model on what you see on the screen, the things you've seen so far. The more often you've seen something, the higher its log-likelihood under that PixelCNN model; but when you, let's say, enter a new room in this game for the first time, the log-likelihood of that new thing you see on the screen will be very, very low, it'll be a bad score. That's a signal that this is something you need to explore, because you're unfamiliar with it, as
01:40:02
01:40:36
6002
6036
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6002s
https://i.ytimg.com/vi/Y…axresdefault.jpg
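A hedged sketch of turning the density model's score into a bonus: a frame with low log-likelihood under the online-trained PixelCNN earns a larger reward. The actual pseudo-count papers derive the bonus from how much the model's likelihood changes after training on the frame; the simpler form below, and the pixelcnn_log_prob callable, are illustrative stand-ins.

```python
def novelty_bonus(frame, pixelcnn_log_prob, scale=0.05):
    """Unfamiliar frames (e.g. the first view of a new room) have very
    low log-likelihood, so they receive a large exploration bonus.
    `pixelcnn_log_prob` is a hypothetical trained model returning log p(frame)."""
    log_p = pixelcnn_log_prob(frame)
    return scale * max(0.0, -log_p)   # more negative log p -> bigger bonus
```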
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
measured by the low log-likelihood score. So you can effectively give exploration bonuses based on the log-likelihood scores under your PixelCNN model, which you train online as your agent is acting in the world. There's a comparison here between using these bonuses versus just using random exploration, and it helps a lot. Another way to do this: you can train a variational autoencoder, which gives you an embedding, and then you can map these embeddings into a hash table and just do counting in that hash table,
01:40:36
01:41:09
6036
6069
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6036s
https://i.ytimg.com/vi/Y…axresdefault.jpg
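A sketch of the hash-table variant, assuming a learned (say, VAE) embedding of the observation and SimHash-style random projections to discretize it; the embedding size and bit count are illustrative, and the details differ from the actual paper.

```python
import numpy as np
from collections import defaultdict

K, D = 32, 64                      # hash bits, embedding dimension
A = np.random.randn(K, D)          # fixed random projection (SimHash-style)
hash_counts = defaultdict(int)

def hashed_count_bonus(embedding, beta=0.1):
    """Discretize the continuous embedding into a K-bit code, then do
    plain visit counting on the codes, as in the tabular case."""
    code = tuple((A @ embedding > 0).astype(int))
    hash_counts[code] += 1
    return beta / np.sqrt(hash_counts[code])
```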
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
and that's something we did a couple of years ago, and it helps a lot in terms of giving the right kind of exploration incentives to explore difficult-to-explore environments more efficiently. Another thing you can do, which maybe gets more at the core of what you really want but is a little more complicated to set up, is variational information maximizing exploration (VIME). The idea here is the following: when you are in a new situation, what makes it interesting that it's new? Well, one way to measure this
01:41:09
01:41:43
6069
6103
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6069s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
is to say, hey, if I'm in a situation where, after taking an action, I cannot predict what's happening next very well, then I'm not familiar with this, so I should give a bonus for having gone into unfamiliar territory. That's called curiosity; we'll cover that in a moment, and it's actually been pretty successful. But it's also a little defective, because if you just have something that's stochastic in the world, let's say you roll some dice, well, it's gonna be unpredictable. So to make this more tractable, one thing you can do is say, hey, I
01:41:43
01:42:22
6103
6142
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6103s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
don't want to be getting exploration bonuses when something is inherently unpredictable, but I do want to get them when something is unpredictable because I have not learned enough yet about it. And the way we did this in VIME: you set up a dynamics model that you're learning, and as you learn the dynamics model, as new data comes in, you can see, we actually set up a posterior over dynamics models, a distribution over possible dynamics models, and as new data comes in, you update that posterior;
01:42:22
01:42:56
6142
6176
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6142s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
if that updated posterior is very different from the previous posterior, it means that you got interesting information, that it allowed you to learn something about how the world works, so that should give you an exploration bonus, because you did something interesting to learn about the world. But with throwing the dice: if the dice have been rolled many, many times and then get rolled again, and you couldn't predict the outcome, because that's just randomness you cannot predict, well, your model for the dice will already say it's uniform, you know, over
01:42:56
01:43:21
6176
6201
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6176s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
all possible outcomes; that model will not see much update, if any, and you will not be given an exploration bonus. So that's the idea in VIME: you only get exploration bonuses when the data updates your posterior over how the world works. And again, it's shown here that this helps a lot in terms of exploring more efficiently. Under the hood these are really self-supervised-type ideas: fitting models or small ensembles, learning representations of the dynamics models, and giving exploration bonuses based on that. The simple version of that is
01:43:21
01:43:54
6201
6234
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6201s
https://i.ytimg.com/vi/Y…axresdefault.jpg
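The VIME-style bonus can be summarized as information gain: the KL divergence between the dynamics posterior after and before seeing a new transition. The posterior interface below (copy, update, kl_from) is a hypothetical stand-in for the variational machinery in the paper.

```python
def information_gain_bonus(posterior, transition, eta=0.1):
    """Reward the agent in proportion to how much a transition (s, a, s')
    changes its posterior over dynamics models. Familiar dice rolls barely
    move the posterior, so they earn (almost) no bonus."""
    old = posterior.copy()                # snapshot of current beliefs
    posterior.update(transition)          # hypothetical Bayesian update
    return eta * posterior.kl_from(old)   # KL(new posterior || old posterior)
```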
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
called curiosity, where you more directly look at, you know, whether something was predictable or not. For mostly deterministic environments that's often actually enough, and it's had a lot of success in many of these game environments. Another thing you can do with self-supervised learning and representation learning for exploration is to think about it in a more deliberate way. You could say, hey, it's not just about getting bonuses after seeing something new; it should also be about thinking about what I should even do before I experience it. I can set a goal
01:43:54
01:44:27
6234
6267
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6234s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
for myself; but what makes for a good goal when I'm trying to explore? In Goal GAN, the idea is the following. In this case, let's look at iteration 5 down here: there's a set of points that you've reached in this maze. You start at the bottom left; you did a bunch of runs to reach a set of points, and what you notice is that when you set goals in the green area, you're able to consistently achieve your goals, whereas in the blue area it's high variance, and in the red area you usually don't achieve your goals. You
01:44:27
01:45:05
6267
6305
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6267s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
can then conclude and say, oh, actually, in the future I should set my goals in the blue/red area, because that's the frontier of what I know how to do. And how are you gonna do that? You're gonna learn some kind of generative model to generate goals in that regime. In Goal GAN you have a GAN trained to generate goals at the frontier of what you're capable of, and this allows you to explore mazes much more efficiently, because the GAN is setting goals to go to places at the frontier of your capability, so you continue expanding
01:45:05
01:45:35
6305
6335
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6305s
https://i.ytimg.com/vi/Y…axresdefault.jpg
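The frontier labeling that drives the goal generator can be sketched as follows: goals with intermediate success rates (the blue area) become positives for training the GAN. The thresholds are the illustrative kind of values used in this line of work, not exact numbers from the lecture.

```python
def label_frontier_goals(success_rates, r_min=0.1, r_max=0.9):
    """A goal is at the frontier if the agent sometimes but not always
    reaches it; consistently reached (green) and never reached (red)
    goals are excluded."""
    return {g: r_min <= rate <= r_max for g, rate in success_rates.items()}

# e.g. {'a': 0.0, 'b': 0.5, 'c': 1.0} -> only goal 'b' is a frontier positive
```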
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
your skills. You can also do this with a variational autoencoder; that's done in RIG, where the variational autoencoder is generating new goals. Those goals are not necessarily at this frontier in the same way; they're essentially goals that are similar to things you've seen in the past, but the hope is that frequently enough they are at the frontier, so that you learn relatively quickly. You can also reweight those goals based on how much they're at the frontier, as measured in something called Skew-Fit, which is an extension of this
01:45:35
01:46:06
6335
6366
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6335s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
paper, which changes the sampling in latent space to get closer to sampling from the frontier rather than just from what you've seen in the past. So for RIG itself, here are some examples of this in action: you see robots learning to reach and to push. That's the kind of thing that in general is pretty hard to explore for, because normally a robot would just be waving around in the air and so forth; here you can set goals that relate to moving objects around, and then it would be inclined to move towards objects and
01:46:06
01:46:43
6366
6403
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6366s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
move them. Now another thing you can do in terms of exploration, leveraging generative models or unsupervised models, is skill transfer, and this should remind you of how we initially motivated unsupervised learning, or some of the motivation, which was that transfer learning can be very effective with deep neural nets. Wouldn't it be nice if we could transfer from a task that does not require labels onto a task that requires labels? That's transfer from an unsupervised learning task, to then fine-tune on a supervised task. Well,
01:46:43
01:47:17
6403
6437
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6403s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
similar ideas can be applied in reinforcement learning. So what's going on here? So far we mostly talked about going from observations to state, that kind of representation learning, but there's another type of representation learning that matters: representation learning around objectives, behaviors, tasks. The question here is how you do unsupervised learning for these things. To contrast with what's done now: to explore, you maybe put some noise on your actions, and that way you have some random behavior and you might explore something
01:47:17
01:47:47
6437
6467
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6437s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
interesting, but it's gonna take a long time. Sometimes it's shown to be a bit more effective if you explore by putting randomness on the weights of your neural network, so you more consistently deviate in one way or the other. A good example of why the thing on the right works better than the thing on the left: let's say you're supposed to explore a hallway; with a random walk left-right, it will take very long to get to the end of the hallway and explore both ends of the hallway, whereas the one on the right would induce a bias to, say, walk to the right, and
01:47:47
01:48:17
6467
6497
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6467s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
maybe the next random weight perturbation would induce a bias to go to the left, and maybe after a couple of rollouts it would have gone to both ends, and that's it. But it's still really counting on randomness; it's not really using any knowledge or experience from the past to explore something new more quickly. And that's the question we're after: can we use experience from the past to now learn to explore more quickly? For example, if you have been in environments like the one shown on the left here, where, when you're in the environment, you
01:48:17
01:48:48
6497
6528
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6497s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
don't get to see the red dots (the red dots are just for us; imagine the agent cannot see the red dots), and any time you get dropped in the environment, the reward is at a spot on that semicircle, but you don't know which spot, so you have to go find that reward. After a while you should realize: I should go to the semicircle and see which point on the semicircle has the reward, and that will be more efficient exploration than to just randomly walk around in this 2D world and then randomly maybe run into the reward on
01:48:48
01:49:16
6528
6556
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6528s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
that semicircle. Or, shown on the right, imagine you're supposed to push a block onto the red flat target in the back, but you don't know which block you're supposed to push. Well, you'd have a very good strategy, saying: I push the purple one, hmm, no reward; okay, I'm gonna try to push the green one, no reward; I'll try to push the cyan one, no reward; then the yellow one, ah, reward! I push the yellow one again and keep collecting reward. That's what we would do as humans, but how do we get that kind of exploration behavior, that's much
01:49:16
01:49:47
6556
6587
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6556s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
more targeted than random motions, into an agent, and how do we get it to learn to do that? Well, what we really want then is somehow a representation of behaviors. For example, pushing objects makes for an interesting behavior that often relates to reward, whereas random motion where the gripper does not interact with objects will rarely be interesting and rarely lead to rewards. That's the kind of thing we want to learn in our representation of behaviors. Here is one way we can do that. This one is supervised pre-training, but just
01:49:47
01:50:19
6587
6619
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6587s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
to set some context (it's not unsupervised for now, but we will go from supervised pre-training and transfer to unsupervised pre-training and transfer very soon): imagine you have many, many tasks; for each task you have a discrete index at the top, which is turned into an embedding; that embedding is fed to the policy, the current state observation is also fed into the policy, and the policy takes an action. If you train this policy for many, many tasks at the same time, then it'll learn, depending on which task is represented by this index, to take a good action for that task.
01:50:19
01:50:54
6619
6654
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6619s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
But now the additional thing done here is that this latent code z is forced to come from a normal distribution. What does that do? The normal distribution means that if, in the future, we don't know what the task is, nobody tells us what the task is, there might be a new task, we can actually sample from this distribution to get exploratory behavior. So you say, oh, let's sample a z, and the policy will still do something very directed, something that relates to maybe interacting with objects, as opposed to
01:50:54
01:51:26
6654
6686
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6654s
https://i.ytimg.com/vi/Y…axresdefault.jpg
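A sketch of exploring a new task by sampling the latent code: because z was forced to come from a standard normal during training, drawing z ~ N(0, I) at test time yields directed behaviors rather than random jitter. The policy/environment interface here is a hypothetical minimal one.

```python
import torch

def run_episode(policy, env, z):
    """Roll out one episode with z held fixed (assumed interface:
    env.reset() -> obs, env.step(a) -> (obs, reward, done), policy(obs, z) -> a)."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(torch.as_tensor(obs, dtype=torch.float32), z)
        obs, reward, done = env.step(action)
        total += reward
    return total

def explore_new_task(policy, env, num_trials=20, z_dim=8):
    """Sample latent codes from the N(0, I) prior the policy was trained
    under and keep the code that earns the most reward on the new task."""
    best_z, best_return = None, float('-inf')
    for _ in range(num_trials):
        z = torch.randn(z_dim)
        ret = run_episode(policy, env, z)
        if ret > best_return:
            best_z, best_return = z, ret
    return best_z
```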
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
just some random jittering. To make this even stronger, in this case there's a mutual information objective between the trajectory and the latent variable z here, and it turns out that actually helps. So you learn on a bunch of tasks this way, and then you get a new task and you explore by generating latent codes z, and at some point you'll find a z that actually leads to good behavior, and you'll start collecting higher reward. Now, we can make this less supervised. We can do this a little differently. We can say, well, let's not
01:51:26
01:52:00
6686
6720
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6686s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
even have discrete task indexing; let's just have a latent code going in, and learn a policy that pays attention to the latent code while collecting reward. Why would that happen? Well, there will still be many tasks under the hood, but we're not telling it the index of the task, we're just letting it experience reward. So what it'll learn to do is sample a z; if that z leads to successful behavior on the task, it'll reinforce that z; if it doesn't, it'll have to sample a different z, and so forth. So here's some task
01:52:00
01:52:32
6720
6752
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6720s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
families: every dot in the semicircle corresponds to a different task. We hope here that it would learn to associate different z's with different spots on the semicircle, such that when it later explores by sampling different z's, it would go to different spots on the semicircle, and when it finds the one that's successful, it's able to reinforce that. Same for the wheeled robot here, and here's the block pushing task. Looking at the learning curves, we see that indeed, by getting to pre-train on this notion of
01:52:32
01:53:05
6752
6785
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6752s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
indexing into tasks, or a distribution over tasks, and then being able to explore by sampling possible tasks, it's able, in blue here, to learn very quickly to solve new tasks compared to other approaches. The generated behaviors we see are also very exploratory: the exploratory behaviors indeed correspond to visiting the semicircle; this is for the wheeled robot in the middle here, then the walking robot, and on the right is the block pushing. What would it look like if you hadn't done representation learning for exploratory
01:53:05
01:53:37
6785
6817
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6785s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
behaviors? Instead of having this nice push behavior, you would have just some jittery behavior of the robot gripper that wouldn't really interact with the blocks or get any block to the target area. After it's done those exploratory behaviors, of course, the next thing that happens is a policy gradient update, which will update the policy to essentially sample z from a more focused distribution, one that focuses on the part of the z-space that corresponds to the part of the semicircle where the target is, or to the block that needs to be pushed. Okay, now, what we did
01:53:37
01:54:11
6817
6851
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6817s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
here was transfer from having a set of tasks to then solving a new task relatively quickly by having good exploration behavior. But we still needed to define a set of tasks and then transfer from that. The question now is: how do we make this completely unsupervised, where we just have the robot on its own learn a range of behaviors, and then at test time explore in a meaningful way to zone in on a specific skill quickly? Let's take a look. There are actually multiple lines of work that effectively do the same thing but try different objectives, with the same
01:54:11
01:54:50
6851
6890
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6851s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
high-level idea. So the high-level idea is: we're still gonna have a policy pi that conditions its actions on the observation (the current state) and on a latent code, which might or might not come from a discrete codebook; it could also come from a normal distribution, so we can resample it in the future. This policy results in trajectories, and the way we're going to pre-train it is by saying that there needs to be high mutual information between the trajectory that results from this policy and the latent code it's acting based upon. So you
01:54:50
01:55:22
6890
6922
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6890s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
start your rollout; at the beginning of your rollout you sample z; you keep z fixed for the entire rollout to get a trajectory; and you want the trajectory to say something about whatever z you used for this trajectory. What does it mean to have high mutual information between the trajectory tau and z? It can be measured in many ways, and that's what these four different papers do: the first paper uses a discrete variable and the trajectory; the second paper looks at z and the final state; the third paper looks at z and every
01:55:22
01:55:54
6922
6954
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6922s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
intermediate state independently, summed together; and then the fourth one looks at z and the full trajectory as a whole. They all get fairly similar results, actually. So here's the third paper, the Eysenbach et al. paper, showing the range of behaviors that comes out of this when you apply it to the cheetah robot. For different z's you get different behaviors; we see that, thanks to the mutual information objective, for different z's the trajectories look very different, so indeed a different z results in a very different
01:55:54
01:56:28
6954
6988
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6954s
https://i.ytimg.com/vi/Y…axresdefault.jpg
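The mutual-information objective in the Eysenbach et al. (DIAYN) style can be sketched as a pseudo-reward from a learned skill discriminator: the agent is rewarded when the active skill z is easy to infer from the states it visits. The discriminator interface is a hypothetical stand-in.

```python
import math

def skill_reward(discriminator, state, z, num_skills):
    """Roughly log q(z|s) - log p(z): high when the visited state reveals
    which skill is active, which pushes different z's toward visibly
    different trajectories."""
    log_q = discriminator.log_prob(state, z)   # learned inference q(z|s), assumed
    log_p = -math.log(num_skills)              # uniform prior over skills
    return log_q - log_p
```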
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
trajectory. And of course the beauty of this is it learns to check out all these behaviors for different z's. Now at test time, if you need to do something else, say you need to run at a certain speed, there will be z's that already correspond to running forward, and then you can fine-tune that z directly, or learn a policy on top, to figure out the z that will result in the behavior that you want. Here are some videos from the Achiam et al. paper, looking at and curating all kinds of different trajectories corresponding
01:56:28
01:57:02
6988
7022
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=6988s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
to all kinds of different latent variables z, and we see that for the same latent variable z, the same kind of trajectory gets output. And here are some more videos (well, some of these cannot be played for some reason), but here's the cheetah robot with the Achiam et al. approach. This is not to show that the Achiam et al. approach might be better than the Eysenbach et al. approach; I think it's just to show that it's actually very similar, so the difference between those four objectives might not be too important.
01:57:02
01:57:47
7022
7067
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7022s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
There are actually some limitations to this approach; this observation comes from the Achiam et al. paper. When you have a humanoid, which is very high-dimensional compared to the cheetah (which essentially just kind of stands up, runs, or ends up on its head), and you try to find high-mutual-information behaviors between z and trajectories, it can take a long time, or it can achieve a lot of mutual information with all trajectories actually staying on the ground, because there's a lot of different things you can do on the
01:57:47
01:58:21
7067
7101
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7067s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
ground, and it's not something where you necessarily automatically get it to run around, since running is very hard to learn, whereas doing all kinds of different tricks on the ground is much, much easier. Okay, so let me summarize what we covered today. We covered a lot of ground, much more quickly than in most of our other lectures, because this lecture is more of a sampling of ideas of how representation learning and reinforcement learning have come together in various places, rather than a very deep dive into any one
01:58:21
01:58:53
7101
7133
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7101s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
of them as we've done in previous lectures. The big high-level ideas are: when we train a neural network with deep reinforcement learning, one idea is adding auxiliary losses, and if those losses are related to your task, well, it might help you learn more quickly than if you did not have those auxiliary losses; the most canonical paper there was the UNREAL paper. Under the hood, a lot of this comes down to state representation: if we have high-dimensional image inputs, well, hopefully under the hood in this task often there
01:58:53
01:59:28
7133
7168
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7133s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
is a low-dimensional state, and so there's many things you can do to try to extract a latent representation that is closer to state than the raw pixels are. Once you're working with a latent representation closer to state, or maybe even matched to state, learning might go more quickly, and in fact we've seen that with the CURL approach it's possible to learn almost as quickly from pixels as from state. It's not just about turning a raw sensor observation into a state; there's other things you can do with representation learning in RL. You
01:59:28
02:00:05
7168
7205
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7168s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
can have it help with exploration. You can have it help in exploration by helping you generate exploration bonuses, essentially measuring which things are new; canonically, in a tabular environment, this is measured by visitation counts, but in high-dimensional spaces you'll always visit new states, so you need to measure how different a new state is from past states, which you can do with generative models and their likelihoods. Another thing you can do in terms of exploration is think about generative models for behaviors that are
02:00:05
02:00:38
7205
7238
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7205s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
interesting, such that exploration becomes a matter of behavior generation rather than random action all the time. Or you can learn generative models for goals that might be interesting to set, and then set goals with your generative model for a reinforcement learning agent to try to achieve, to expand its frontier of capabilities. And the final thing you can do is unsupervised skill discovery. In unsupervised skill discovery, what we do is we essentially have no reward at all in a pre-training phase, but the hope is that the agent nevertheless starts exhibiting
02:00:38
02:01:14
7238
7274
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=7238s
https://i.ytimg.com/vi/Y…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
to be able to introduce Alec Radford. Alec Radford is a research scientist at OpenAI. Alec has pioneered many of the latest advances in AI for natural language processing; you might be familiar already with GPT and GPT-2, efforts that Alec led at OpenAI. And of course, earlier in the semester we covered DCGAN, which was the first GAN incarnation that could start generating realistic-looking images, and that was also led by Alec. It's a real honor to have Alec with us today, and yeah, Alec, please take it away from here.
00:00:00
00:00:39
0
39
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=0s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
Yeah, totally, I'm super excited to be here and present, because this course covers, like, my favorite research topic, unsupervised learning, and yeah, just really excited to chat with you all today. So today I'm gonna focus on the NLP and text side, and I'm just gonna start the timer. Today I'll be talking about just kind of generally learning from text in a scalable, unsupervised kind of fashion, kind of give a history of the field and some of the, you know, main techniques and approaches, and kind of walk through the methods and kind of where we are today,
00:00:39
00:01:10
39
70
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=39s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
as well as providing some commentary on kind of supervised learning versus unsupervised learning in NLP, and why I think, you know, unsupervised methods are so important in this space. Yeah, so let's, I guess, get started. So, learning from text: you know, one of the, I think, prerequisites to kind of start with is that standard supervised learning requires kind of, you know, what we'd call machine-learning-grade data, and what I mean by that is your canonical machine learning dataset, at least in an academic context, is
00:01:10
00:01:42
70
102
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=70s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
something like: you go use a crowd-worker pipeline and you very carefully curate gold-label standards for some data you're trying to annotate. This is a pretty involved, expensive process, and you often are emphasizing kind of quality and specificity and preciseness for the thing you care about, the task you're trying to predict, and maybe a very specific targeted data distribution. What this often means is you get a small amount of very high-quality data, and even for some of the largest efforts in this space, just because you have paid human
00:01:42
00:02:16
102
136
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=102s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
feedback often involved, and sometimes you're ensembling the predictions of three, five, or more labelers, it's often the case that a few hundred thousand examples is like a big dataset, especially for NLP. In computer vision you sometimes see, you know, things like ImageNet, where they push that to a million or ten million, but those are kind of far outliers, and, you know, very many canonical NLP datasets might only have five or ten thousand labeled examples. So there's not really a lot of machine-learning-grade data out there, at least compared to what the current
00:02:16
00:02:49
136
169
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=136s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
learning complexities and efficiencies of current models are. You know, one of the primary criticisms of modern supervised learning, deep learning in particular, is how data-intensive it is, so we really have to get that number down, and this lecture is basically going to be discussing the variety of methods that have been developed for using the natural language that is available beyond kind of the machine-learning-grade data: unsupervised or scalable self-supervised methods that hope to somehow pre-train, to do some
00:02:49
00:03:18
169
198
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=169s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
auxiliary objective or task, or, you know, hand-design some method, that allows you to improve performance once you flip the switch and go to supervised learning on the standard machine-learning-grade data, or, in the limit, as we'll talk about later, get rid of the need entirely for a classic supervised learning dataset and potentially begin to learn tasks in a purely unsupervised way and evaluate them in a, like, zero-shot setting. So there's a variety of methods; this lecture is going to focus primarily on autoregressive maximum-likelihood
00:03:18
00:03:51
198
231
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=198s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
language models; they're kind of the core, and I think they're the most common uniting thread that carries from the early days of this field through to the current modern methods. But I want to, you know, make clear up front that there are many proxy objectives and tasks that have been designed in natural language processing to somehow, you know, do something before the thing you care about in order to do better on the thing you care about, and there's quite a lot, and in particular in the last year or two we've now seen that area really kind
00:03:51
00:04:20
231
260
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=231s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
of grow dramatically, and in many cases they now outperform the standard language-model-based methods that I kind of treat as the core of the presentation, and we'll talk more about the details of the differences as we get to those parts. So, some more motivation and intro as we kind of get going. I think one of the ways to think about this is, like, what do we do with the Internet? So, you know, the wild Internet appears, and you can either have your glowing-brain-esque representation on the left, which we can laugh at, or we can show, you know, how messy
00:04:20
00:04:51
260
291
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=260s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
and random and weird and difficult it might be for algorithms to learn from, on the right; that's good old GeoCities. And so, you know, there's a lot of skepticism, I think, about kind of these approaches, which might at the highest level look kind of silly or whimsical, to be like, let's just throw an algorithm at the Internet and see what comes out the other end. But I think that's actually kind of, like, a one-sentence summary of basically what modern NLP has been seeing a lot of success from, and, you know, I think one of
00:04:51
00:05:21
291
321
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=291s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
the reasons why is just that the Internet is so big, there's so much data on it, and we're starting to see some very exciting methods of learning from this kind of messy, large-scale, uncurated data. So there's a great tweet from an NLP researcher just kind of showing just how bizarre and, you know, kind of just massive the Internet is, where you can go and find an article about how to open doors. And, you know, there's often a lot of arguments saying that, oh, you know, we're not going to get there this way, and it feels
00:05:21
00:05:52
321
352
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=321s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
wrong in the limit to be like, yes, let's just throw algorithms at the Internet and see what happens; like, that doesn't match human experience, that doesn't match kind of the grounded, embodied agents that, you know, we think of as intelligent systems, and instead it's this kind of just, like, processing bits or abstract tokens. So there's a lot of skepticism about this approach, but I think that just quantity and scale and these methods play very well with current techniques, and, you know, you see lots of arguments about things like, oh,
00:05:52
00:06:18
352
378
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=352s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
there's this long tail and we're never going to be able to deal with composition, but really, maybe brute force can get us surprisingly far, at least in the near term. I'm not saying that these methods or techniques are the end-all-be-all, but at least today there's, I think, strong evidence that we shouldn't dismiss this somewhat silly approach at a high level. So let's start with kind of, I think, what would be the, like, simplest starting point where we can convert this kind of high-level idea into something that
00:06:18
00:06:50
378
410
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=378s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
looks like a machine learning algorithm. So we process a bunch of text on the Internet, let's say, and we're going to build this matrix called the word-word co-occurrence matrix. What we can kind of think of is: it's a square matrix where the (i, j)-th entry corresponds to, for a given word like 'water', the count of how often another word co-occurs with it. You have to define what a co-occurrence is; it just means that the two happen to be present together, and you might define a window for this, for instance they both
00:06:50
00:07:22
410
442
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=410s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
occur in the same sentence, or within five words of each other, or, in the limit, you can go quite far with, like, just happening to occur in the same document on the Internet. And so you're just gonna brute-force count; it's just counting, that's all it is, we're just going over, you know, tons and tons of text, and we're just building up this table, basically, just a lookup table, and it just tells you, oh, the words 'steam' and 'water' co-occur 250 times, or, you know, the word 'steam' appears in the dataset 3,224 times total, or, you know,
00:07:22
00:07:49
442
469
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=442s
https://i.ytimg.com/vi/B…axresdefault.jpg
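A minimal sketch of the brute-force counting just described, with a +/-5-word window; the tokenization and corpus format are placeholders.

```python
from collections import Counter

def cooccurrence_counts(corpus, window=5):
    """For each word, count every other word appearing within `window`
    positions of it. `corpus` is an iterable of pre-tokenized sentences
    (lists of words)."""
    counts = Counter()
    for tokens in corpus:
        for i, w in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    counts[(w, tokens[j])] += 1
    return counts

# counts[('hot', 'water')] then holds the raw co-occurrence count
```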
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
the words 'hot' and 'water' co-occur, you know, 19,540 times. So that's all we're doing, and this is, you know, one, incredibly scalable: you can just run a Spark job over the entire Internet with this kind of system and quickly get this giant table, and it's, you know, not computationally intensive, it's just counting and processing and tokenization; this thing can be run on a common desktop and get very far. And it's simple, it's just counting. So how good is counting a bunch of stuff? Like, we're talking about something incredibly
00:07:49
00:08:22
469
502
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=469s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
basic, it's just kind of: how often do these two things occur together? And I think, you know, one of the big takeaways that I'm gonna return to a lot during this presentation is just how far these simple methods that are scalable, with large amounts of data, can get. So this is a great example: a paper called 'Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions'; it's from Clark et al. at AI2, from 2016, and what they do is they take this same data structure, this word-word co-occurrence matrix.
00:08:22
00:08:53
502
533
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=502s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
Actually, let me start with the task. The task is elementary science questions, so it's just, I believe, through 5th grade, kind of, you know, elementary-school, simple science questions. They're multiple choice, so there are four possible answers, and they're these kind of simple things, like: a student crumpled up a flat sheet of paper into a ball; what property of the paper changed: hardness, color, mass, or shape? Or, you know: what property of a mirror makes it possible for a student to see an image in it: is it volume, magnetism, reflectiveness, or
00:08:53
00:09:25
533
565
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=533s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
conductivity? So this is the kind of thing that, like, you know, again, is pretty basic at a high level: they're, you know, relatively simple facts, and they don't require all that much in the form of reasoning or comprehension, but they're still the kind of thing that we do give to, you know, kids learning about the world. And so you might think that, like, oh, you know, this is the kind of thing where, to understand a mirror, you really need to, you know, exist in the world and, you know, learn about all these properties, or to have a teacher,
00:09:25
00:09:51
565
591
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=565s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
and how are we gonna get there with just kind of this brute-force thing that just counts a bunch of words and puts them into a table and then starts looking them up? And, you know, the takeaway here is that it can work surprisingly well. So you can't quite pass these exams, but the specific solver that we're gonna talk about in a second is the PMI solver, and that gets to about 60%, while random guessing is 25%, so we basically almost, you know, halve the error rate and get to a D with just this very dumb brute-
00:09:51
00:10:22
591
622
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=591s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
force approach. So what actually is this solver? They call it the pointwise mutual information (PMI) solver, and what you can think of it as is: it just scores all of the possible answers. We have this sentence of context, you know, the question, and then we have, you know, four possible answers. What we do is we loop over, basically, the sentence, and we just look for the word-word co-occurrences and we just keep counting them up, and we use this scoring formula, which is the log of a ratio between two probabilities: the first, the P(x, y), is
00:10:22
00:10:56
622
656
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=622s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
the joint, which is basically the co-occurrence, and so that gets you that count; that's basically looking it up directly from the table, the (i, j) entry for (x, y). And then you normalize by this kind of baseline assumption, which is that the words do not co-occur more often than by chance, so that would be just their independent probabilities multiplied together. As you can imagine, those may be quite small, and multiplying them together makes them even smaller, but some words do co-occur together, so 'mirror' occurs with 'reflective', or,
00:10:56
00:11:27
656
687
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=656s
https://i.ytimg.com/vi/B…axresdefault.jpg
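To make the scoring rule concrete, here's a minimal sketch of a PMI-style multiple-choice scorer in Python. The toy corpus, the `score_answer` averaging rule, and the whitespace tokenization are illustrative assumptions, not the actual Aristo solver; a real system counts co-occurrences over web-scale text within a sliding window.

```python
import math
from collections import Counter
from itertools import product

# Toy corpus standing in for a large web-scale text collection (assumption).
corpus = [
    "a mirror is reflective and shows an image",
    "reflective surfaces like a mirror reflect light",
    "magnetism attracts iron but a mirror is not magnetic",
]

unigram = Counter()   # word counts
pair = Counter()      # within-sentence word-word co-occurrence counts
total = 0
for sent in corpus:
    words = sent.split()
    unigram.update(words)
    total += len(words)
    for x, y in product(set(words), repeat=2):
        if x != y:
            pair[(x, y)] += 1

def pmi(x, y):
    """log P(x, y) / (P(x) P(y)); tiny epsilon avoids log(0) for unseen pairs."""
    p_xy = (pair[(x, y)] + 1e-8) / total
    return math.log(p_xy / ((unigram[x] / total) * (unigram[y] / total)))

def score_answer(question, answer):
    """Average PMI between question words and answer words (hypothetical aggregation rule)."""
    q = [w for w in question.split() if w in unigram]
    a = [w for w in answer.split() if w in unigram]
    scores = [pmi(x, y) for x in q for y in a]
    return sum(scores) / max(len(scores), 1)

question = "what property of a mirror makes it possible to see an image"
options = ["volume", "magnetism", "reflective", "conductivity"]  # stemmed toy options
print(max(options, key=lambda o: score_answer(question, o)))     # -> reflective
```

With real counts, the answer whose words co-occur most with the question words wins; that simple association is all the roughly-60% result rests on.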
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
electricity occurs with lightning, or crumpled up might co-occur with hardness. And that's all this method does: it just exploits these basic associations between words, and that can get you surprisingly far. It doesn't feel like real learning, maybe, and it's definitely not very human-like, but it's just an example of the power of basic methods, and how something that doesn't involve any intelligence, or the hand-waving that we might make about
00:11:27
00:12:01
687
721
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=687s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
complicated systems; it's just a big lookup table, you know, a job you might run over the internet, and it can get you surprisingly far. So there's a problem with working with these word-to-word co-occurrence matrices: they're huge. Let's say we have a million-word vocabulary, so we have a million words by a million words just to have the full version naively, and then you might store it with int32 (hopefully you don't need int64), so that's four bytes per entry. Storing this whole matrix in memory in a dense representation is four terabytes
00:12:01
00:12:31
721
751
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=721s
https://i.ytimg.com/vi/B…axresdefault.jpg
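That four-terabyte figure is easy to sanity-check with a quick back-of-the-envelope calculation:

```python
vocab = 1_000_000                  # a million-word vocabulary
bytes_per_entry = 4                # int32 counts
dense_bytes = vocab * vocab * bytes_per_entry
print(dense_bytes / 1e12, "TB")    # -> 4.0 TB for the dense matrix
```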
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
and that's still huge for today; most machines don't have that much memory in them. And if we were to start working on how do we use this system, or how do we make it more general, we just have this matrix, and you can definitely design hand-coded algorithms to go look up entries and query on it, and we've seen they can get quite far, but we'd like to do more: how does this slot into NLP more broadly? So we want to come up with a more compact but faithful
00:12:31
00:13:04
751
784
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=751s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
representation of the relations between the words and the information they represent, and we could just say that we really want to find a way of representing this giant co-occurrence matrix as something more like what we know from deep learning and machine learning in general. So here's the algorithm called GloVe, from Pennington et al. at Stanford NLP in 2014. We take that matrix of word-word co-occurrences; like I mentioned, it's cheap, so you can run this thing on, like, a trillion tokens, and each entry X_ij would be the count of word i
00:13:04
00:13:33
784
813
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=784s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
co-occurring in context with word j. And what we're going to do instead is learn an approximation of this full matrix, and the way we're going to do it is: we're going to redefine a word as a low-dimensional, or at least, compared to a million-by-a-million matrix, much lower dimensional vector. So we're gonna learn a dense distributed representation of a word, and all we're gonna say is this very simple model, such that we're trying to predict the log co-occurrence
00:13:33
00:14:03
813
843
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=813s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
count of the X_ij entry. And the way we're going to do it is: we look up the vector representation of word i and the vector representation of word j, and we're just gonna say their dot product should be proportional to the log co-occurrence count, and that's all this is. So it's really simple, and you can just use a weighted squared-error loss; that's what this f(X_ij) is, basically a weighting function to account for the fact that some words are way more common, and you don't want to over-train this thing on those
00:14:03
00:14:34
843
874
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=843s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
words, and you might also want to clip, because you might have extremely long-tailed frequency distributions and things like that. But at the end of the day you just have the w_i dot w_j, plus some bias terms, and you're just trying to match that to the log of the raw co-occurrence count. So this allows us to go from that giant M-by-M matrix, which might be a million by a million, to an M-by-N matrix, where there's M words and each is an N-dimensional vector, and it often turns out that these can approximate that full
00:14:34
00:15:05
874
905
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=874s
https://i.ytimg.com/vi/B…axresdefault.jpg
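As a concrete sketch of that objective, w_i · w̃_j + b_i + b̃_j ≈ log X_ij under a weighted squared-error loss, here's one SGD step in numpy. The x_max = 100 and alpha = 0.75 constants follow the published GloVe recipe; the toy sizes and learning rate around them are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 1000, 50                              # M words, N-dimensional vectors (toy sizes)
W = rng.normal(scale=0.1, size=(M, N))       # word vectors
W_ctx = rng.normal(scale=0.1, size=(M, N))   # context vectors
b = np.zeros(M)                              # word biases
b_ctx = np.zeros(M)                          # context biases

def weight(x, x_max=100.0, alpha=0.75):
    """GloVe's f(X_ij): down-weights rare pairs and caps very frequent ones."""
    return min((x / x_max) ** alpha, 1.0)

def sgd_step(i, j, x_ij, lr=0.05):
    """One SGD step on f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2."""
    diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(x_ij)
    g = weight(x_ij) * diff                  # shared gradient factor
    W[i], W_ctx[j] = W[i] - lr * g * W_ctx[j], W_ctx[j] - lr * g * W[i]
    b[i] -= lr * g
    b_ctx[j] -= lr * g
    return weight(x_ij) * diff ** 2          # this pair's loss contribution

# Training iterates over every nonzero (i, j, X_ij) triple for a few epochs.
loss = sgd_step(i=3, j=17, x_ij=42.0)
```

Note that only nonzero entries are ever visited, which is why the sparsity of the co-occurrence matrix keeps this cheap despite the nominal million-by-million size.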
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
co-occurrence matrix quite well, and they're much, much smaller dimensionality; they might be just 300 dimensions. And there's a question of what does this thing learn and how does it approximate that, but empirically it just can compress it quite well. And this might make sense, because you can imagine that many, many words just never occur with each other all that often; in fact, simple sparse storage of that full matrix gets a lot smaller already. But we work mostly with dense distributed representations these days
00:15:05
00:15:32
905
932
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=905s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
in deep learning, so we're gonna smash it into the framework we know. There's another variant of this. Yep, the question: so do you still have to first build the full matrix and then you run this? So this is a way of, having had the full matrix, you then run this as a way of compressing or re-representing the matrix. Correct, thanks. Mm-hmm. So now, as an example where you don't have to build that full matrix, there's another variant, very similar, and I think usually a more well-known version of this kind of
00:15:32
00:16:05
932
965
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=932s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
algorithm, called word2vec. And so word2vec is instead a kind of predictive framework where, instead of saying we've got this abstract co-occurrence matrix and we're going to try to compress it and represent it as word vectors, we're gonna just work with the natural sequence of text. So you might have a short sentence like 'the cat sat on the mat', and what you're gonna do is, there's going to be a model that's trained to take a local context window, like 'the cat sat', maybe two
00:16:05
00:16:34
965
994
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=965s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
words of past context and two words of next context, do an incredibly simple linear operation like summing them, and then just try to predict the word in the center. So this is called the continuous bag-of-words representation: continuous because it's a distributed representation, bag of words because the operation that composes the context is just a sum, or a bag. And then we just predict the output, and we can parameterize that as the log probability of the word in the center of the context. And there's the inverse
00:16:34
00:17:04
994
1024
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=994s
https://i.ytimg.com/vi/B…axresdefault.jpg
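Here's a minimal numpy sketch of that CBOW objective: sum the context embeddings and softmax-predict the center word. The tiny vocabulary and the full softmax (rather than the sampled approximations discussed shortly) are simplifications for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
V, N = len(vocab), 16
E = rng.normal(scale=0.1, size=(V, N))   # input embedding table
U = rng.normal(scale=0.1, size=(V, N))   # output (prediction) table

def cbow_logprob(context_ids, center_id):
    """log P(center | context) where the context is composed by a plain sum (the 'bag')."""
    h = E[context_ids].sum(axis=0)       # the simple linear composition
    logits = U @ h
    logits -= logits.max()               # numerical stability
    return logits[center_id] - np.log(np.exp(logits).sum())

# "the cat [sat] on the": two past and two future context words predict the center.
ids = {w: i for i, w in enumerate(vocab)}
lp = cbow_logprob([ids["the"], ids["cat"], ids["on"], ids["the"]], ids["sat"])
```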
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
version of this, which is the skip-gram model, which, given a central word of context, tries to predict the window. And so this uses the more standard approach of online training: it just streams over a bunch of examples of text, you can use mini-batch training, it looks like your standard algorithms now. The same way I mentioned some tricks, like using the log co-occurrence or a re-weighting function, you need those same kinds of things here. Again, many words span many different ranges of frequencies; you might
00:17:04
00:17:33
1024
1053
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1024s
https://i.ytimg.com/vi/B…axresdefault.jpg
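And the inverse direction, skip-gram, as a sketch: given the center word, sum the log-probabilities of each surrounding word. Training is just gradient ascent on this quantity streamed over text; the full softmax is again a simplification.

```python
import numpy as np

def skipgram_logprob(E, U, center_id, context_ids):
    """Sum of log P(context word | center word) over the window."""
    logits = U @ E[center_id]
    logits -= logits.max()               # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return sum(log_probs[c] for c in context_ids)
```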
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
have words like 'the' be literally 7% of all your data. So if you naively train word2vec without subsampling or resampling based on the frequency distribution, seven percent of your compute is going to modeling the word 'the', and then some important word or phrase, like 'New York City' or something, is just basically lost in the noise. So we use a re-weighting function, I believe it's the inverse fifth root, so it just works, and that heavily truncates the frequency distribution. So they're basically doing the same thing
00:17:33
00:18:05
1053
1085
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1053s
https://i.ytimg.com/vi/B…axresdefault.jpg
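A sketch of the frequency-based subsampling just described, so that 'the' stops eating most of the compute. The inverse-fifth-root exponent follows the lecture's (hedged) recollection; the commonly published word2vec rule, discard with probability 1 - sqrt(t / f), is a close cousin, and the scale constant below is an arbitrary toy choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def keep_prob(freq, power=-0.2, scale=0.1):
    """Keep a token with probability proportional to freq^(-1/5) (assumed form)."""
    return min(1.0, scale * freq ** power)

def subsample(tokens, freqs):
    """Randomly drop tokens of very frequent words before training."""
    return [t for t in tokens if rng.random() < keep_prob(freqs[t])]

freqs = {"the": 0.07, "new": 1e-4, "york": 5e-5}
print(subsample(["the", "new", "york", "the", "the"], freqs))  # mostly drops "the"
```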
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
today: this is a predictive framework where it takes in a sequence and tries to predict some subset of that sequence with a very simple linear model, and you just have the same word embedding table we talked about. But they both do about the same thing, and they're kind of the canonical first round of distributed, scalable, unsupervised or self-supervised representations for NLP. Again, there's no human supervision classically involved in these algorithms; they just have
00:18:05
00:18:33
1085
1113
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1085s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
this automated procedure to just churn through large amounts of data. And word2vec came out of Google in like 2013, and one of the first things was, it's written to run on a big CPU cluster with a very efficient C++ implementation: you shove a bunch of words through it and it works really well. And so let's talk about what this does. So for this graph I'm gonna talk about how... I'm gonna interrupt for a moment, if you go back, yep: so on the left, the words are represented by vectors, and then you
00:18:33
00:19:04
1113
1144
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1113s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
average, and you're supposed to get a vector representing the middle word; on the right, where do the embeddings live? They're the same embeddings, the word embedding table, so they're both inputs and targets. So you would basically slice out some word w_t from your list, you would then also pull a sequence of context to be predicted, like the word before and the word after, and then you would have the same prediction objective, like the log prob of that word at that location. And there's other approximations that I'm kind of just
00:19:04
00:19:39
1144
1179
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1144s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
glossing over right now: how to do this efficiently, because computing a full normalization of the predictions over, like, a full million-size vocabulary is very expensive, so you can often use a tree structure or a subsampling algorithm, where you might normalize over only a randomly selected subset, and you can weight that subset, and things like this, or negative sampling. Is the prediction some kind of inner product between w_t and w_{t-2}? Yeah, so that would be how you'd get the logit for the log probs; it's a
00:19:39
00:20:12
1179
1212
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1179s
https://i.ytimg.com/vi/B…axresdefault.jpg
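To make that answer concrete: the logit for a (center, context) pair is just the inner product of their vectors, and negative sampling replaces the full softmax with a handful of binary logistic terms. A sketch; k = 5 negatives and the unigram-to-the-3/4 sampling distribution are word2vec's commonly cited choices, not something specific to this lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_sampling_loss(E, U, center, context, unigram_probs, k=5):
    """-log sigmoid(u_ctx . e_c) - sum_k log sigmoid(-u_neg . e_c)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    p = unigram_probs ** 0.75                    # flatten the unigram distribution
    p /= p.sum()
    negatives = rng.choice(len(p), size=k, p=p)  # ignoring rare collisions with `context`
    loss = -np.log(sigmoid(U[context] @ E[center]))
    loss -= np.log(sigmoid(-(U[negatives] @ E[center]))).sum()
    return loss
```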
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
dot product as well, yeah; sorry, I should've been clearer about that operation, thank you. Cool, thanks Alec. So yeah, what do we do with these things? So this is where a lot of the first wave of kind of modern (modern is a contentious word) NLP, starting to leverage large-scale unsupervised data, figured out how to use these things. So these examples on the left are with GloVe, and what we see is a suite of tasks. So there's the Stanford Sentiment Treebank, which is predicting, for a sentence of a movie
00:20:12
00:20:45
1212
1245
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1212s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
review, is it a positive review, that they liked the movie, or is it a negative review. IMDB is another sentiment analysis dataset, but it's a paragraph of context. TREC-6 and TREC-50 are classifying kinds of questions, like who/what/where/when. And SNLI is a much fancier thing, logical entailment: it's measuring the relation between two sentences, a premise sentence and a hypothesis sentence, and you're basically trying to say, given the premise, does the following sentence follow logically from
00:20:45
00:21:21
1245
1281
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1245s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
it, is it entailed, is it kind of irrelevant, or containing information that's maybe correct but maybe not, which would be neutral, or is it actually a contradiction with the previous sentence? So it might be the first sentence is 'a woman is walking a dog' and the second sentence is 'a man is playing with a cat', and that would just be a contradiction of the first sentence. So that's SNLI, and it's a sensible objective, and it's kind of this more complex operation because it's doing
00:21:21
00:21:50
1281
1310
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1281s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
logical reasoning, supposedly, and it's doing it over semantic concepts. Like, you might need to know the relations involved in playing an instrument, or know that a saxophone is an instrument, so that if the premise is 'a man is playing saxophone', you know the hypothesis might be entailed if it's 'the man is playing a musical instrument'. So that one has kind of an interesting relation to some more semantic content. And the final example here is SQuAD, which is a question answering dataset, so you get a paragraph
00:21:50
00:22:17
1310
1337
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1310s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
from Wikipedia and you have to predict, given a question, what the answer is from that paragraph. And so for all of these datasets, again, this is a pretty broad suite of tasks, you see multiple-absolute-percentage performance jumps from slotting in word vectors, compared to randomly initialized components of the models that were used for prediction. So you can always do random initialization, the standard canonical thing in deep learning, or you could use these pretrained vectors, and they really do seem to help in terms
00:22:17
00:22:47
1337
1367
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1337s
https://i.ytimg.com/vi/B…axresdefault.jpg
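"Slotting in" the word vectors is literally just initializing the model's embedding table from the pretrained vectors instead of randomly. A sketch, where `glove` stands for an already-loaded word-to-vector dict (the file loading itself is assumed):

```python
import numpy as np

def init_embeddings(vocab, glove, dim=300):
    """Embedding matrix: pretrained GloVe vector where available, random otherwise."""
    rng = np.random.default_rng(0)
    E = rng.normal(scale=0.1, size=(len(vocab), dim))
    for i, word in enumerate(vocab):
        if word in glove:
            E[i] = glove[word]   # copy the pretrained vector
    return E
```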
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
of data efficiency, and you can see in some cases, like for question answering, that you can get a 10%-plus absolute improvement here for GloVe (GloVe plus CoVe is another thing, which we'll come to in a bit). And why might these be helping so much? So that's the kind of empirical data; well, on the right here we have some of the work that was done to inspect the properties of these word vectors. So they would, for instance, have a query vector, like the word frog, and then they would show all of the different possible nearest words
00:22:47
00:23:18
1367
1398
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1367s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
in terms of just cosine similarity to that first word. So you can see that immediately it's the plural version of it, frog to frogs, and toad is very similar to frog; rana is, I guess, a more scientific name, and then you get slightly farther-out things like lizard. So you can see how that can simplify the problem space: if we have a distributed model and we have an input that's asking a question about a frog, if we don't have any knowledge of the structure of language or the relations between the word frog and toad, it's, you
00:23:18
00:23:50
1398
1430
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1398s
https://i.ytimg.com/vi/B…axresdefault.jpg
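That nearest-neighbor inspection is just cosine similarity between the query row and every other row of the embedding matrix; a quick sketch:

```python
import numpy as np

def nearest(query, vocab, E, top_k=5):
    """Rank vocabulary words by cosine similarity to the query word's vector."""
    q = E[vocab.index(query)]
    sims = (E @ q) / (np.linalg.norm(E, axis=1) * np.linalg.norm(q) + 1e-12)
    order = np.argsort(-sims)
    return [(vocab[i], float(sims[i])) for i in order if vocab[i] != query][:top_k]
```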
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
know, naively basically impossible for that model to then generalize to the same question asked about a toad instead. But if we have this dense distributed representation that is bringing these words together into a similar feature space, then you might expect that, well, if the representation for frog is very similar to the representation for toad, the model might just be able to generalize and handle that. And there's even more relations and properties beyond just similarity in that embedding space
00:23:50
00:24:17
1430
1457
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1430s
https://i.ytimg.com/vi/B…axresdefault.jpg
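One canonical demonstration of those extra relational properties (the standard example in this literature, not necessarily the slide's) is vector arithmetic: offsets between embeddings often encode relations, so king - man + woman lands near queen. A sketch in the same style as the neighbor lookup above:

```python
import numpy as np

def analogy(a, b, c, vocab, E):
    """Word closest to E[b] - E[a] + E[c], e.g. man : king :: woman : ?"""
    idx = {w: i for i, w in enumerate(vocab)}
    target = E[idx[b]] - E[idx[a]] + E[idx[c]]
    sims = (E @ target) / (np.linalg.norm(E, axis=1) * np.linalg.norm(target) + 1e-12)
    return next(vocab[i] for i in np.argsort(-sims) if vocab[i] not in (a, b, c))

# With real GloVe vectors, analogy("man", "king", "woman", vocab, E) often returns "queen".
```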