Columns: video_id, title, text, start_timestamp, end_timestamp, start_second, end_second, url, thumbnail
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
quite better. But my confusion is: why would it be like a normal distribution? Isn't each key represented by a single value in the lookup table? It's a constant, right, so why would it have a distribution? If you're given a string you just count, right, and the counts are constants. Say we restrict to a through z; then for each character you have basically the probability that it occurs in this stream, and that's a constant value, so why would that have a distribution?
02:37:17
02:38:01
9437
9481
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9437s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Maybe let's back up for a bit and talk about what we're trying to do. What we're trying to do is turn the latent variable model into a compression algorithm. Starting from square one, we have a VAE. What's the input of the VAE? An image; it's a stream, right? Let's say for the 1D case it's a stream. I propose we take further questions offline because we've got a lot to cover. Yes, okay, happy to talk about this later. Okay, so here is a description of the same thing. During the encoding
02:38:01
02:38:54
9481
9534
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9481s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
phase, the encoder decodes Z from the bit stream, then encodes X and Z. And you can check that this is decodable: if you just run everything in reverse you end up getting X. So you decode Z, you decode X, and then you can re-encode Z -- I'm writing P here where it should actually be Q -- and the re-encoding part is the "getting bits back" here. Once the receiver re-encodes Z, the receiver now has a slightly longer bit stream from which it can start to decode the next Z. So those are exactly
02:38:54
02:39:45
9534
9585
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9534s
https://i.ytimg.com/vi/p…axresdefault.jpg
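The segment above steps through the bits-back order of operations (decode Z under q, encode X and Z; on the receiving side decode Z, decode X, re-encode Z). Below is a minimal control-flow sketch, assuming toy ans_push / ans_pop stand-ins so the last-in-first-out structure is visible; the names are illustrative, not a real codec, and no actual compression happens.

# Toy stand-ins for a real ANS codec: the "stack" here is just a Python list of
# symbols, so pushes and pops are exact inverses but nothing is compressed.
def ans_push(stack, symbol, dist):
    stack.append(symbol)

def ans_pop(stack, dist):
    return stack.pop()

def bb_ans_encode(stack, x, q_z_given_x, p_x_given_z, p_z):
    z = ans_pop(stack, q_z_given_x(x))   # "decode" z from existing bits: samples z and shortens the stream
    ans_push(stack, x, p_x_given_z(z))   # encode the data under the decoder model p(x|z)
    ans_push(stack, z, p_z)              # encode the latent under the prior p(z)
    return stack

def bb_ans_decode(stack, q_z_given_x, p_x_given_z, p_z):
    z = ans_pop(stack, p_z)              # run the encoder's steps in reverse (LIFO)
    x = ans_pop(stack, p_x_given_z(z))
    ans_push(stack, z, q_z_given_x(x))   # re-encode z under q(z|x): this is "getting the bits back"
    return x, stack

# Round trip: the auxiliary bits the encoder consumed come back after decoding.
stack = ["aux_bits"]
stack = bb_ans_encode(stack, "x0", lambda x: "q(z|x)", lambda z: "p(x|z)", "p(z)")
x, stack = bb_ans_decode(stack, lambda x: "q(z|x)", lambda z: "p(x|z)", "p(z)")
assert x == "x0" and stack == ["aux_bits"]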
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the bits that were given back right here. Okay, so there are two points we should talk about when getting BB-ANS working with continuous latent variable models like VAEs, which is that these Z's are continuous -- Z comes from a standard normal distribution -- and we can't really code continuous data. But what we can do is discretize it to some high precision. If you take Z and discretize it to some level delta Z, then you pretty much turn the probability density function p(z)
02:39:45
02:40:33
9585
9633
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9585s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
into a probability mass function, capital P(z), which is p(z) times delta Z -- what you get by integrating the density over this small region of volume delta Z. You can do that for both the posterior and the prior, and you see that these deltas cancel out. So what we get is that the bits-back code length, with the discretization being the same between the prior and the posterior, still gives you the same KL divergence term as in the VAE bound. The second point was one that somebody
02:40:33
02:41:18
9633
9678
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9633s
https://i.ytimg.com/vi/p…axresdefault.jpg
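The cancellation described above can be written out; here is a sketch of the identity in LaTeX notation, with delta z the bin width and capital P, Q the discretized mass functions (x is already discrete):

P(z) \approx p(z)\,\delta z, \qquad Q(z \mid x) \approx q(z \mid x)\,\delta z

L_{\text{bits-back}}(x) = \mathbb{E}_{Q(z \mid x)}\big[ -\log P(x \mid z) - \log\!\big(p(z)\,\delta z\big) + \log\!\big(q(z \mid x)\,\delta z\big) \big]
                       = \mathbb{E}_{q}\big[ -\log P(x \mid z) \big] + D_{\mathrm{KL}}\!\big(q(z \mid x)\,\|\,p(z)\big)

since the delta z factors cancel inside the log ratio.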
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
brought up, which is that we decode Z from the bit stream -- that's how we sample Z, by decoding from the stream -- but in order for that to really give us a good sample, the bits that we decode from have to actually be random, and that's not necessarily true. In a VAE, if you just work out what's going on, basically if the KL divergence between the aggregate posterior q(z) and the prior is small, then those bits will be random, or pretty close, and that'll be
02:41:18
02:42:07
9678
9727
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9678s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
good enough to get a good sample. Of course, in practice, for a VAE that's not trained perfectly, this KL isn't going to be zero, but in practice this doesn't seem to matter too much. One thing that might actually work to ensure that the bits are random, which I haven't seen explored, is to just encrypt the bit stream -- that'll make the bits look random, and then you can decode anything from it. So I think in practice it's not a problem, and what's nice is that this scheme, bits back with ANS, seems to work pretty
02:42:07
02:42:44
9727
9764
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9727s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
well. The authors of this paper implemented this bits-back ANS algorithm for VAEs trained on MNIST, and they found that the numbers they got were very close to -- pretty much the same as -- the variational bound on the negative log likelihood, which is exactly what you want; that's what is predicted. So this thing works as well as advertised. Right, so in our work, what we did was look at latent variable models which are not just one layer, because we know
02:42:44
02:43:32
9764
9812
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9764s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that the more powerful the model, the better the log likelihoods we get out of it, so we should get better compression. Here we're looking at a setting where the model has a Markov chain structure over the latent variables: there are latent variables Z_L, Z_{L-1}, down to Z_1, and then X. This is the graphical model of the sampling path, and the inference path -- the Q's -- goes the other way, and they're both Markov chains. So this is a particular type of model that we're looking at, and if you have this
02:43:32
02:44:18
9812
9858
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9812s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
particular model, with this sort of chain structure, there are two ways to view it. You can view it as a VAE with this block of latent variables as just one latent variable, and then you can run BB-ANS on it and that works perfectly fine. But another way to view it is as a latent variable model with -- let me just draw the layers again, so here's X, Z1, Z2, Z3 -- you can view Z1 as the one and only latent variable, but then you see that its prior is itself a VAE with the same structure: its prior is p(Z1),
02:44:18
02:45:05
9858
9905
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9858s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
which is a VAE whose prior is p(Z2), which is another VAE, and so on. So these are two equivalent ways of looking at the same model. In terms of log likelihood they're the same, because if you write down the variational bounds they're equal, but they suggest slightly different compression algorithms with different practical consequences. The idea is that instead of treating the Z's as one single block -- one large latent variable -- you can actually recursively invoke bits-back coding into the prior. So you can just
02:45:05
02:45:47
9905
9947
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9905s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
code the first variable. So here this is the algorithm as usual: basically, decode Z, then encode X, then encode under the prior -- this is p(X|Z), and here q(Z|X). What you can do instead is code just the first layer and then recursively invoke bits-back coding into the subsequent layers. The consequence of doing this -- I won't go through the exact steps -- is that you no longer have to decode the entire block of latent
02:45:47
02:46:44
9947
10004
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9947s
https://i.ytimg.com/vi/p…axresdefault.jpg
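A sketch of the two encode orderings described in this and the previous segment, as I understand them, for a two-layer chain x <- z1 <- z2. It reuses the same toy LIFO stand-ins as the earlier sketch (no real compression); the point is only how many symbols must already sit on the stack before the first push.

# Toy LIFO stand-ins again, only to show the ordering.
def ans_push(stack, symbol, dist): stack.append(symbol)
def ans_pop(stack, dist): return stack.pop()

def encode_as_one_block(stack, x, q1, q2, p_x, p_z1, p_z2):
    # Block view: decode the whole latent block (z1, z2) first -> needs more initial bits.
    z1 = ans_pop(stack, q1(x))
    z2 = ans_pop(stack, q2(z1))
    ans_push(stack, x,  p_x(z1))
    ans_push(stack, z1, p_z1(z2))
    ans_push(stack, z2, p_z2)
    return stack

def encode_recursively(stack, x, q1, q2, p_x, p_z1, p_z2):
    # Recursive view: code x against z1 first; the bits just pushed for x can be
    # consumed when z2 is decoded, so fewer auxiliary bits are needed up front.
    z1 = ans_pop(stack, q1(x))
    ans_push(stack, x, p_x(z1))
    z2 = ans_pop(stack, q2(z1))
    ans_push(stack, z1, p_z1(z2))
    ans_push(stack, z2, p_z2)
    return stack

q = lambda cond: "q"   # placeholder conditional distributions
p = lambda cond: "p"
encode_as_one_block(["aux1", "aux2"], "x0", q, q, p, p, "p(z2)")   # needs two symbols up front
encode_recursively(["aux1"], "x0", q, q, p, p, "p(z2)")            # gets by with one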
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
variables at the very first step; rather, you just need to decode one of them, then you can add more to the bit stream, then decode more, and so on. What that means is that you need fewer auxiliary bits to start bits-back coding. Remember, for BB-ANS to make sense you need a bit stream with some bits on it to even sample Z in the first place, and those bits must be sent across; if there are no real bits to put there, those auxiliary bits end up wasted. So if you're able to
02:46:44
02:47:23
10004
10043
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10004s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
not have to decode too many latent variables in the first go, then you can save on transmitting those auxiliary bits. You can see in these experiments that, especially for deep latent variable models, we're able to get better code lengths compared to just decoding the entire block of latent variables at once. Right, so that was VAEs; let's move on to how to turn flow models into compression algorithms. In this class we went through a series of likelihood-based models, like autoregressive models
02:47:23
02:48:05
10043
10085
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10043s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
and flows, and we've been seeing in this lecture that really any likelihood-based model is a compression algorithm. So what about flow models? They should also be compression algorithms, and what's particularly appealing about them is that we can write down the exact log likelihood of a flow model -- this is not a bound, this is the real thing -- so hopefully we should be able to get some really good compression with this. So let's think about what that actually means. It turns out that it
02:48:05
02:48:39
10085
10119
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10085s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
doesn't really make sense to ask for a compression algorithm that achieves this code length -- just the flow log likelihood formula -- and the reason is that flows are density models, and it doesn't make sense to code continuous data because you'd need infinite precision to do that. Rather, what we're going to say is that we'll code data discretized to high precision. So you have your space of data -- let's say this is the space of all possible images -- and then we just tile it with this very
02:48:39
02:49:12
10119
10152
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10119s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
fine grid, and we're just going to discretize every possible data point. Instead of coding one data point exactly, we'll just code the bin that it lies in -- that's g(x), the cube that a data point lies in. The point of doing this is that if you define a probability mass function given by integrating the density given by the flow over these cubes, then you get a negative log likelihood that looks like this: just negative log of the
02:49:12
02:49:50
10152
10190
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10152s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
density times delta. So this is now a probability mass function, and now it makes sense to say that we can compress up to this code length. So the code length that we're going to aim for when we compress with flow models is this: negative log of the flow density times delta. It's really just the same thing plus this additional term here, which is just the number of bits of discretization -- it can actually be a lot of bits, but we can recover them later. Right, so now we have a probability mass function that corresponds
02:49:50
02:50:28
10190
10228
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10190s
https://i.ytimg.com/vi/p…axresdefault.jpg
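The target code length in the last two segments can be written out as follows (LaTeX sketch, with delta x the per-dimension bin width and d the data dimension):

P(\text{bin of } x) \approx p_{\text{flow}}(x)\,\delta x^{d}

-\log P \approx -\log p_{\text{flow}}(x) + d \log \tfrac{1}{\delta x}

where the second term is the discretization cost, which is what gets recovered later with bits back.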
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
to the flow. So can we just run Huffman coding? The answer is no, because to do that we'd need to build a tree that's as big as the number of possible data points, and we're working with large images here, so that's exponential in the dimension -- not tractable. We need to harness the model structure; we actually have to make use of the fact that this is a flow model. One naive attempt -- maybe the most intuitive thing -- is to take the latent that you get out of the
02:50:28
02:51:01
10228
10261
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10228s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
flow. Say we want to code X: why not just compute Z by passing X through the flow and code Z using the prior? That's very simple, but unfortunately it doesn't work. You can write down the code length that you get -- it's just negative log p(z) times delta -- but if a flow model is trained well, then the distribution of Z's will match the prior. Say the prior is Gaussian: you end up coding Gaussian noise using a Gaussian prior, so that's no
02:51:01
02:51:41
10261
10301
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10261s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
compression at all. If you compare this expression with the one up here, you see that this is the missing term: somehow this naive approach does not take into account the Jacobian -- the fact that the flow changes volume -- so we have to deal with that. Okay, so how do we do this? The claim is that we can turn any flow model into a VAE; actually, we can locally approximate it using a VAE. So we have this flow model -- here's f, here's the flow model -- and
02:51:41
02:52:26
10301
10346
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10301s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
here's a point x, and we turn it into f(x) -- this is just what a flow does. And we can define this distribution here, this distribution on the left, this ellipsoid: we define it to be a normal whose mean is just f(x), the latent, but we give it this covariance matrix, which is sigma squared -- just a small number like 0.0001 or so, a hyperparameter of this algorithm -- times the Jacobian times the Jacobian transpose of the flow model. And so
02:52:26
02:53:12
10346
10392
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10346s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
this is what we call the encoder, and this is the decoder. So on top of the flow model we define this encoder and decoder, and the decoder is just the inverse of the flow with a small identity covariance -- that's what this ellipsoid on the left and the small circle on the right are. So why did we define this? Well, the point is that a flow model represents a differentiable function, so if you have some data point x and you add a very small amount of noise to it and then you
02:53:12
02:53:49
10392
10429
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10392s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
map that to the latent space, that small amount of Gaussian noise that you added at the beginning will also be Gaussian -- it'll just have this twisted, stretched-out covariance, and that's given by how the flow behaves locally linearly, which is just the Jacobian. We know that if you take a multivariate Gaussian and multiply by a matrix you get another multivariate Gaussian, and locally the flow behaves like a linear transformation whose matrix is the Jacobian. So that's where this comes from, and the point is
02:53:49
02:54:28
10429
10468
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10429s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that if you then run bits-back coding using these two distributions -- this is q(z|x) here, this is p(x|z) -- the code length that you get from bits-back coding will be exactly what we wanted, plus a little error term, a second-order error term. So this is a way of turning a flow model into a compression algorithm: locally approximate it with a VAE defined like this, and then the
02:54:28
02:55:09
10468
10509
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10468s
https://i.ytimg.com/vi/p…axresdefault.jpg
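A LaTeX sketch of the quantities the last few segments refer to, as I understand them (sigma is the small hyperparameter, J the Jacobian of the flow f at x, delta x the discretization width):

q(z \mid x) = \mathcal{N}\!\big(z;\; f(x),\; \sigma^2 J(x) J(x)^{\top}\big), \qquad J(x) = \partial f / \partial x

p(x \mid z) = \mathcal{N}\!\big(x;\; f^{-1}(z),\; \sigma^2 I\big)

L_{\text{bits-back}}(x) \approx -\log\!\big( p_Z(f(x))\,\lvert\det J(x)\rvert\,\delta x \big) + O(\sigma^2)

i.e. the exact flow code length up to the small second-order error term mentioned above.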
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
code length that you get from bits-back coding on that will match what we wanted, plus a very small error term. What's nice about this is that it turns an intractable algorithm into a more tractable one. If you wish to directly implement this algorithm, it turns out you do have to compute the Jacobian of the flow model and you do have to factorize it in a certain way, and that's polynomial time -- better than exponential time, but still not good enough for high-dimensional data. And so the
02:55:09
02:55:43
10509
10543
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10509s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
solution to that is that we can specialize this algorithm even further. For autoregressive flows, for example, it turns out we can just code one dimension at a time without ever constructing that Jacobian, so that works in linear time. If we have a composition of flows, like we do in RealNVP, then we can code one layer at a time and recursively invoke this coding into the next layer, just like we did with hierarchical VAEs. So altogether, for RealNVP-type flows, if you
02:55:43
02:56:17
10543
10577
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10543s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
implement it correctly, you never need to compute the Jacobian, and you actually get a linear-time compression algorithm. So that's nice, and we achieve this code length here, which is negative log density times delta. But if you look at it, this suffers by a term of negative log delta x, which can actually be quite bad -- like 32 bits or something -- because we had to discretize the data very finely so that we could approximate the integral that defines the probability mass
02:56:17
02:56:55
10577
10615
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10577s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
function easily. That seems like a huge waste of bits, especially if we want to transmit, say, integer data like images from CIFAR, which are specified as integers -- we don't want to have to transmit lots of bits after the decimal point. The solution is to use those extra bits for bits back again, and if you want to do that, it turns out there is an optimal way of doing it, and the sort of encoder you use for that is a dequantizer, which I think we talked
02:56:55
02:57:32
10615
10652
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10615s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
about. So if you plug bits-back coding into the dequantizer to get those extra bits, then altogether the code length you get is the variational dequantization bound, which is what you explicitly train to be small on the data set, so it ends up being reasonable. With all this in place, we tried it for the models that we trained, and we found we were able to get code lengths that are very close to what is predicted by the variational dequantization bound, and this holds across all these data sets.
02:57:32
02:58:17
10652
10697
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10652s
https://i.ytimg.com/vi/p…axresdefault.jpg
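For reference, a LaTeX sketch of the variational dequantization bound the segment refers to (the standard form, with u a dequantization noise variable on the unit cube; the exact notation in the paper may differ):

-\log P_{\text{model}}(x) \;\le\; \mathbb{E}_{u \sim q(u \mid x)}\!\left[ -\log \frac{p_{\text{flow}}(x + u)}{q(u \mid x)} \right], \qquad u \in [0,1)^d

and the discretization bits are recovered by running bits back through q(u | x).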
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
And there is a caveat, which is that this algorithm does need lots of auxiliary bits -- actually much more than VAE-type methods -- and that shows up in the fact that we need something like 50 bits per dimension just to send one image. That means this algorithm really does not make sense if you just want to send one data point, but if you wanted to use this algorithm for, say, each frame in a long video or a movie, then the initial overhead can be amortized
02:58:17
02:58:52
10697
10732
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10697s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
across all the different frames. So that is a caveat of this algorithm. Finally, let's talk about some other things which are not exactly about bits back. All the algorithms we've talked about so far basically fall into the framework of: you pre-train a generative model on some training set, which you assume is drawn from the same distribution as the test set you want to compress, and then you devise a coding algorithm that matches the negative log likelihood of
02:58:52
02:59:34
10732
10774
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10732s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that model, and that's how you go. But there are actually other types of algorithms, which are quite successful in text compression and which we all use -- like in gzip and zip and so on -- that learn online. You don't really pre-train them on a certain data set; you just give them a file and they learn how to compress it online. And it turns out that, at least theoretically, these types of algorithms, given lots of resources, can actually learn to compress any distribution, so we call them
02:59:34
03:00:07
10774
10807
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10774s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
universal codes. There's one algorithm, called Lempel-Ziv, which works like this -- I'll just try to describe it very quickly. Here's a long string that you're trying to compress, and the way it works is that when you're at some position in the file -- let's say we're at this position and we want to code the future -- what you do is try to find a string starting at this position which has already occurred in the past. So
03:00:07
03:00:55
10807
10855
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10807s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
here we have this string AAC, and we see that AAC occurred in the past, so let's just store the index into the past: this occurred one, two, three time steps into the past, so let's store the number three, and then we also add on the next character, which is B. That's basically how this works. At this point C, we see there's a string in the future, part of which occurred in the past, so let's just store the number three, which indicates that you need to jump three into the past and copy that
03:00:55
03:01:34
10855
10894
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10855s
https://i.ytimg.com/vi/p…axresdefault.jpg
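A toy Python sketch of the match-and-copy idea just described (LZ77-style); the window size and greedy longest-match search are arbitrary illustrative choices, not any particular production compressor.

def lz_parse(data, window=1 << 12):
    """Greedy LZ77-style parse: emit (offset, length, next_char) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):          # search the window for the longest match
            k = 0
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz_unparse(triples):
    s = []
    for off, length, nxt in triples:
        for _ in range(length):
            s.append(s[-off])                           # copy from `off` positions back
        s.append(nxt)
    return "".join(s)

assert lz_unparse(lz_parse("aacaacabcabaaac")) == "aacaacabcabaaac"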
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
string from the past over. So this is roughly how Lempel-Ziv works: you just look for matches between what you're trying to compress and the past, and copy them over. So why is this a good idea? Very roughly: if the source of symbols is independent, then whatever symbol you're at right now will reoccur if you wait long enough, and the recurrence time has a geometric distribution, so the average recurrence time is just one over the probability of
03:01:34
03:02:22
10894
10942
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10894s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the symbol. And the Lempel-Ziv algorithm says to just write down the time you have to look back to find the same symbol again, which takes about log T bits, where T is that time. So on average it's about log of 1 over p(x) bits, and you can see that this goes to the entropy of the source. So this is an interesting algorithm -- it's basically nearest neighbors -- and it's saying that if you just memorize tons of data over time and you run nearest
03:02:22
03:02:59
10942
10979
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10942s
https://i.ytimg.com/vi/p…axresdefault.jpg
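A LaTeX sketch of the rough argument in the segment above, for an i.i.d. source:

\mathbb{E}[T] = \frac{1}{p(x)} \quad\text{(geometric recurrence time)}, \qquad \log T \approx \log \frac{1}{p(x)} \text{ bits for the look-back offset}

\mathbb{E}_x\!\left[\log \frac{1}{p(x)}\right] = H(X)

so the per-symbol cost approaches the entropy of the source.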
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
neighbors, then this learning algorithm does work -- it just might take a very long time to learn, and you can see that it does take a very long time, because template matching does not generalize. Okay, so that was Lempel-Ziv. I'll conclude by giving you a taste of some very recent research on deep learning and compression. By no means is this comprehensive; it's just to give you an idea of what might be out there. So
03:02:59
03:03:38
10979
11018
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10979s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the authors of BB-ANS released some new work at ICLR this year where they show that you can train a fully convolutional deep latent variable model on small images and, just because it's fully convolutional, run it on large images, and they show that this works very well -- these are, I think, some of the best numbers on full-resolution ImageNet, just by using this fully convolutional property. These other authors here describe a very intriguing alternative to bits-back coding: they describe what they call minimal
03:03:38
03:04:18
11018
11058
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11018s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
random code learning, which is a coding scheme for latent variable models that achieves the bits-back code length without needing bits back. The way it works is that the encoder samples a lot of latents -- the number of latents it samples is 2 to the KL divergence between the encoder and the prior -- and then picks one of them at random, and the decoder can do the same thing if they share the same random number generator. It turns out that this is a way to basically get a sort of low-bias
03:04:18
03:04:58
11058
11098
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11058s
https://i.ytimg.com/vi/p…axresdefault.jpg
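A rough Python sketch of the shared-randomness idea as described in the segment: both sides draw the same K ~ 2^KL candidate latents from the prior using a shared seed, and only the chosen index (about KL bits) is transmitted. All names are illustrative; the index is picked uniformly here as in the lecture's description, whereas my understanding is that the underlying paper selects it using importance weights q(z|x)/p(z), which is not shown.

import numpy as np

def encode_latent_shared_seed(kl_bits, prior_sample, seed=0):
    K = max(1, int(round(2 ** kl_bits)))            # roughly 2^KL candidates
    rng = np.random.default_rng(seed)
    candidates = [prior_sample(rng) for _ in range(K)]
    idx = int(rng.integers(K))                      # stand-in for the paper's weighted selection
    return idx, candidates[idx]                     # transmit idx: about log2(K) = kl_bits bits

def decode_latent_shared_seed(idx, kl_bits, prior_sample, seed=0):
    K = max(1, int(round(2 ** kl_bits)))
    rng = np.random.default_rng(seed)
    candidates = [prior_sample(rng) for _ in range(K)]
    return candidates[idx]

# Usage with a standard normal prior over an 8-dimensional latent:
idx, z = encode_latent_shared_seed(10.0, lambda rng: rng.normal(size=8))
z_dec = decode_latent_shared_seed(idx, 10.0, lambda rng: rng.normal(size=8))
assert np.allclose(z, z_dec)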
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
sample from Q by sampling a lot of these latents and picking one, and the number of bits you need to encode the index of the one you picked is just the KL divergence -- just log K, so roughly log K bits. So this achieves the bits-back code length without needing bits back; the trade-off is computational complexity, because the encoder has to collect a lot of samples. And finally there's this other paper, which has a very different flavor from the ones we've been talking about: this is a paper about
03:04:58
03:05:35
11098
11135
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11098s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
lossy compression, where they come up with a recurrent encoder and decoder architecture for lossy compression of sequential data, like videos. The way it works is quite interesting; the very high-level idea is that the encoder simulates the decoder. Normally you would think that the encoder and decoder just operate independently, and the encoder doesn't worry about what the decoder is doing, but if there's this time structure, then the encoder can simulate what the decoder is doing, sort
03:05:35
03:06:07
11135
11167
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11135s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
of one time step behind, and based on that it can send extra information -- or just send the right information -- that will help the decoder reconstruct the data in just the right way. They show how to write down a neural network architecture that captures this idea and optimizes the resulting code length end to end, so that's quite a cool idea. Right, so that's all I have to say; hopefully that was helpful. -- That was great, Jonathan. We're a bit over time, but I'm thinking maybe we can
03:06:07
03:06:49
11167
11209
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11167s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
spend a couple more minutes if people have questions they want to ask as we wrap up here. I also answered a bunch of questions in the chat, in parallel to you making progress on the lecture. -- I had a question about ANS. I still don't see the connection -- it seems like ANS was just a little add-on to this lecture. What's the connection? I don't really see why we need ANS, like why you can't just use another coder. -- Yeah, there are ways of
03:06:49
03:07:28
11209
11248
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11209s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
combining bits back with arithmetic coding; it's just that the popular recent thing to do is to combine bits back with ANS, and the reason we do it is because you get a very clean algorithm that works very well -- that was the motivation. -- Sorry, can't you use bits back with any encoding scheme? -- Yeah, you definitely can; it's just particularly convenient because of the stack structure of ANS, and also because ANS does work well in practice, so there are practical
03:07:28
03:08:06
11248
11286
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11248s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
reasons for using ANS. -- Yeah, maybe also jumping in here: if you look at the Brendan Frey and Geoff Hinton paper, it managed to do compression with a VAE and arithmetic coding, but it incurred a bunch of overhead, because arithmetic coding isn't queue-compatible with the way bits back wants to act like a stack, and so there's an overhead incurred. Then if you look at the Townsend et al. paper, you can see how to make it all compatible by using ANS and get much better compression efficiency than the previous paper that uses arithmetic
03:08:06
03:08:49
11286
11329
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11286s
https://i.ytimg.com/vi/p…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Hi everyone, welcome to lecture 12 of Deep Unsupervised Learning, Spring 2020. Today we'll cover representation learning in reinforcement learning. Before I start, I want to give a big thank you to the many colleagues and friends who have contributed to this lecture through sharing their insights, illustrations, slides, and videos, which all very directly contributed to what I'll be sharing here today. Thank you. So far this class has been about unsupervised learning; today we're actually going to look at how unsupervised
00:00:00
00:00:41
0
41
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=0s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
learning and reinforcement learning can be brought together in a way that makes reinforcement learning more efficient. But we haven't covered reinforcement learning yet in this class, so what I'm first going to do is step through some of the very basics of reinforcement learning -- obviously I can't cover everything, it could be a course in itself -- but we'll go through some of the basics and the recent successes, and then from there look at successes where unsupervised learning and reinforcement learning are brought
00:00:41
00:01:09
41
69
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=41s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
together. So what is reinforcement learning? Reinforcement learning is a problem setting where you have an agent; the agent is supposed to take actions in the world, and as the agent takes actions, the world will change. For example, the world could be the robot body and the environment around the robot body. After the world has changed because of the agent's action, this process repeats over and over, and the goal for the agent is to maximize the reward collected in the process. For example, imagine our agent is supposed to control a self-driving car:
00:01:09
00:01:42
69
102
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=69s
https://i.ytimg.com/vi/Y…axresdefault.jpg
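The agent-environment loop just described, as a minimal runnable sketch. The Gymnasium API and the CartPole environment are my own illustrative choices, not something from the lecture, and the random policy stands in for a real learning agent.

import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward = 0.0
for t in range(200):
    action = env.action_space.sample()        # a real agent would pick actions to maximize reward
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward                    # the agent's goal: maximize reward collected over time
    if terminated or truncated:
        obs, info = env.reset()
print("reward collected:", total_reward)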
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
then the reward might be positive for reaching the destination and negative for getting into an accident. Maybe our agent is a robot chef, and then the reward might be positive for a good meal, even more positive for an excellent meal, and negative for making a total mess in the kitchen. The goal in reinforcement learning is for this agent to figure out, through its own trial and error, how to get high reward. So as the human designer you give a specification with the statement "I'd like high reward," and you say reward is high for the things I just described,
00:01:42
00:02:18
102
138
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=102s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
and then the agent figures out how to achieve that. Another example could be a video game: the score in the video game could be the reward, and the agent is supposed to figure out how to play that game to maximize reward. What are some challenges in reinforcement learning? Let me contrast it with supervised learning. In supervised learning, you have an input and a corresponding output, and the way you supervise your learning system is by saying "for this input, that should be the output; for this other input, that should be the output," and so
00:02:18
00:02:49
138
169
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=138s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
forth. In reinforcement learning, your robot chef might be busy in the kitchen for half an hour, come out with its meal, and you might say good meal or bad meal, but that's not reflective of just the last action the robot chef took -- it's reflective of that whole half hour of working in the kitchen that somehow resulted in a high reward or a low reward. Now, when that robot chef cooks multiple times and sometimes has good outcomes, sometimes bad outcomes, you can start looking at what's common between the good outcomes and what's common
00:02:49
00:03:21
169
201
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=169s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
among the bad outcomes. That process of teasing apart what might have positively contributed and what might have negatively contributed is solving the credit assignment problem; it's one of the big challenges for a reinforcement learning agent. Another big challenge is stability: let's say you have an agent learning to fly a helicopter -- well, helicopters are naturally unstable, so if you're not careful during the learning process you might crash your system, and that might just stop the whole thing. Another big challenge is
00:03:21
00:03:49
201
229
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=201s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
exploration. For a reinforcement learning agent to learn to do things, if it doesn't yet know how to do anything, it has to try things it's never done before -- it has to explore -- and this brings many challenges. When you try things you've never tried before, how do you even know what you should be trying? There could be so many things to try -- what's more interesting, what's less interesting? And it also brings back the stability challenge: how do you make sure that when you try something you don't destroy the system, and so forth. Now, one example
00:03:49
00:04:19
229
259
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=229s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
of reinforcement learning that many people would know from real life is how to train a dog. When you train a dog, the dog is the reinforcement learning agent and you as a human provide rewards: you might give the dog positive reward when it does well and negative reward when it does poorly. You don't control what the dog does -- it's not supervised learning, you cannot tell the dog "do this, do that" and have all its muscles follow your commands. No, the dog will just do some stuff and you'll say good or bad depending on how happy
00:04:19
00:04:54
259
294
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=259s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
you are with what the dog did. So one of the things we want to do in today's lecture is give you a bit of an overview of the successes of reinforcement learning, but then also look at the limitations and how representation learning can help out. One of these successes -- probably the success that put deep reinforcement learning on the map -- was in 2013, when DeepMind came out with the DQN results. DeepMind showed that it's possible for a neural network to learn to play a wide range of Atari games from its own trial and error.
00:04:54
00:05:42
294
342
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=294s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Now, this was a big surprise. Until then, if you looked at reinforcement learning results, they would typically be on relatively small, simple environments, and the input would not be images -- the input to the agent would be a very well-crafted representation of whatever world the agent is in, summarized in a small set of features or state variables. So it was a big surprise that all of a sudden reinforcement learning works with pixels as input. From there a lot of progress was made, of course, including the progress listed on this slide here, a lot of it coming out
00:05:42
00:06:17
342
377
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=342s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
of DeepMind, Berkeley, and OpenAI, and much higher scores and faster learning have been achieved on the Atari benchmark since. It wasn't just Atari: DeepMind also showed it's possible to learn to play the game of Go, a long-standing challenge -- many people thought it would take another 20 years if you'd asked them in 2013 or 2014, but sure enough, in 2015 a computer beat the world champion in Go. The first version, AlphaGo, was a combination of imitation learning and reinforcement learning; the second version, AlphaGo Zero, was pure reinforcement
00:06:17
00:06:52
377
412
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=377s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
learning: it just learned from playing against itself and over time became better than the best human players. Then there was a big result in more advanced video game play: OpenAI showed that the game of Dota 2 can be mastered by reinforcement learning. In 2017 a reinforcement learning agent was shown to master the one-on-one version of the game and beat some of the best human players, and later it was shown that reinforcement learning enables playing very competitively -- not necessarily beating the human world champion team just yet, but
00:06:52
00:07:34
412
454
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=412s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
at a very competitive level with some of the best human teams, through pure reinforcement learning. At Berkeley, in parallel, we were exploring reinforcement learning for robotic control, so here is some reinforcement learning in action -- thus far we've just talked about results; what does it look like in action? Here we see an agent that's learning to control this biped, which is learning to run; we give it positive reward the more it moves to the right, and negative reward for falling to
00:07:34
00:08:06
454
486
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=454s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
the ground. What we see is that over time it figures out a strategy, a control policy, that gets it to run off to the right. The beauty here is that it's able to learn this in a way that is not specific to this two-legged robot, meaning we can take the exact same deep reinforcement learning code and run it on a four-legged robot, and it'll learn to control the four-legged robot; in fact, it can also learn to play Atari games with the exact same code. In this case it's trust region policy optimization, TRPO, combined with generalized advantage
00:08:06
00:08:58
486
538
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=486s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
estimation, GAE, that's able to learn to master all these skills. In this case the robot is learning to get up: the reward is based on how close the head is to standing head height -- the closer the head is to standing head height, the higher the reward. This was then generalized to a much wider range of skills, so what you see here is a reinforcement learning agent that has mastered a very wide range of locomotion skills. And here we see it in action on a real robot: this is BRETT, the Berkeley Robot for the Elimination of Tedious Tasks, because
00:08:58
00:09:33
538
573
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=538s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
we humans don't want to do the tedious tasks -- we want robots to do those tedious tasks for us. What we see here is this robot learning to put the block into the matching opening, and indeed, over time it figures out how to get the block into the matching opening. Under the hood it's learning a vision system and a control system together to complete this task. What's the catch in all of this? Data inefficiency. While mastery was achieved in Atari and Go, in robot locomotion, robot manipulation,
00:09:33
00:10:07
573
607
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=573s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
and so forth, this mastery requires an enormous amount of trial and error, and so the big question is: can we somehow bring that down and reduce the amount of trial and error required to master these skills? It turns out -- I believe, and many others believe -- that representation learning can play a big role in getting to much more efficient reinforcement learning. It's not something that is fully understood yet; this is a domain with a lot of room for more research. So what we'll cover today is a pretty wide range of highlights of
00:10:07
00:10:47
607
647
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=607s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
relatively recent results people have achieved by combining representation learning with reinforcement learning to make RL more efficient. We'll look at four directions: auxiliary losses, state representation, exploration, and unsupervised skill discovery, and we'll unpack these as we go along. One thing you'll notice is that it's not some kind of linear build-up that culminates in the single most important piece; really, what we're going to be covering is a wide
00:10:47
00:11:20
647
680
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=647s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
range of highlights, each of which has its own interesting aspects, and probably whatever ends up being the final solution in the future will combine ideas from many of the research results that we cover today into one system. So let's start with auxiliary losses. The paper I want to start with is the UNREAL paper by DeepMind. The idea here is that reinforcement learning agents can be very data hungry, especially when there are only sparse rewards, and the question is: can we make an agent learn more efficiently by having auxiliary
00:11:20
00:12:02
680
722
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=680s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
prediction and control tasks -- by having it not just learn from the reward signal, because that might be very sparse, but have the agent absorb other signals? Of course, we don't want to supervise anything else as a human, because then it becomes very tedious, but we can self-supervise on things that are available in the environment that the agent can try to learn from, even if they're not exactly the reward signal. The UNREAL agent, which stands for UNsupervised REinforcement and Auxiliary Learning, showed a tenfold improvement in data
00:12:02
00:12:38
722
758
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=722s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
efficiency over A3C, which was the standard RL approach DeepMind used at the time, on the 3D DeepMind Lab, which is a first-person-vision navigation task, and a sixty percent improvement in final scores -- so faster learning and converging to better final scores. So what does the architecture look like? What we see at the top here in the middle is the base A3C agent. Again, this is not a reinforcement learning lecture, so let me give you a little bit of background on what's going on here. In reinforcement learning you have experiences: the
00:12:38
00:13:14
758
794
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=758s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
agent, at any given time, has to make a decision: take the current input, process it, and then through the policy output decide what to do, and through the value function output try to predict how much reward it's going to get in the future from this moment onwards. So there are two output predictions here, and that's the standard base A3C agent: it already predicts two things -- how much reward is coming, that's V, the value, the cumulative reward over time, and the policy pi, which action it should take. Both of those are
00:13:14
00:13:46
794
826
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=794s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
outputs of the same network; that's the basis of this agent. Then all the data gets put into a replay buffer and reused in other ways, and the same neural net that is the A3C agent is given multiple heads -- even more heads -- so it has to make even more predictions. By giving it additional prediction tasks, if these prediction tasks are related to learning to solve the original problem, which is to achieve high reward, then hopefully it'll learn something that will transfer over to the real task we
00:13:46
00:14:19
826
859
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=826s
https://i.ytimg.com/vi/Y…axresdefault.jpg
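A PyTorch sketch of the shared-trunk-with-extra-heads idea described in the last two segments. The layer sizes, input resolution, and head shapes are illustrative assumptions (the paper's pixel-control head is deconvolutional and the trunk includes an LSTM; neither is shown here).

import torch
import torch.nn as nn

class UnrealStyleNet(nn.Module):
    def __init__(self, n_actions, n_aux_cells=7 * 7):
        super().__init__()
        self.trunk = nn.Sequential(                  # shared representation
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
        )
        self.policy = nn.Linear(256, n_actions)      # pi(a|s): base A3C head
        self.value = nn.Linear(256, 1)               # V(s):    base A3C head
        # auxiliary heads share the trunk, so their gradients shape the same features
        self.pixel_control_q = nn.Linear(256, n_aux_cells * n_actions)
        self.reward_pred = nn.Linear(256, 3)         # classify next reward as -, 0, +

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy(h), self.value(h), self.pixel_control_q(h), self.reward_pred(h)

# logits, v, pc_q, r_pred = UnrealStyleNet(n_actions=6)(torch.zeros(1, 3, 84, 84))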
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
care about, and be able to learn the real task more quickly. So what are these auxiliary tasks? The first one is auxiliary Q functions. The idea here is you give additional heads to the neural network that are Q functions; a Q function predicts, for the current situation, how much reward I will get in the future if I take a specific action right now -- so for each possible action, I predict how much reward I might get. Now, the interesting thing about Q function learning is that you can do Q function learning off-policy,
00:14:19
00:14:52
859
892
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=859s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
meaning you can try to solve one task but in the meantime do Q-learning against another task that has a different reward function. That's the key idea here: we're going to take reward functions that are not the ones we care about -- auxiliary reward functions that are easy to automatically extract from the environment -- and do Q-learning against those reward functions, and by doing so, the core of the neural net will learn things that are also useful for the task we actually care about. Okay,
00:14:52
00:15:23
892
923
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=892s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
so let's actually look at that a little deeper here. The base A3C agent is the core thing, and sparse rewards means -- here's the cake analogy from LeCun -- you only get this one tiny cherry on the cake from the real reward that we care about. That's not enough; we want more rewards, and that's exactly what this Q function thing is going to do: we're going to define many other rewards, and those other rewards are going to let us learn from a lot more signal than if we only had our one reward. Okay, so the reward function that was defined here by
00:15:23
00:16:00
923
960
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=923s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
the authors of the paper is called a pixel control reward function. What they do is turn the agent's first-person view of the maze into a coarser, grayscale representation of what it's seeing, and you get rewarded in this auxiliary reward task for how much you're able to change these coarse pixel values. What does that mean? If your agent turns into a direction where things are much brighter than the direction it's facing right now, then the pixel values will change a lot, and that would be a high reward
00:16:00
00:16:39
960
999
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=960s
https://i.ytimg.com/vi/Y…axresdefault.jpg
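A rough Python sketch of the coarse pixel-change pseudo-reward just described; the cell size, grayscale conversion, and cropping are illustrative choices, not the paper's exact settings.

import numpy as np

def pixel_control_reward(frame_t, frame_tp1, cells=4):
    """frame_*: HxWx3 uint8 images; returns a coarse per-cell pseudo-reward map."""
    def coarse_gray(frame):
        gray = frame.astype(np.float32).mean(axis=-1)              # grayscale
        h, w = gray.shape
        gray = gray[: h - h % cells, : w - w % cells]              # crop to a multiple of the cell size
        return gray.reshape(h // cells, cells, w // cells, cells).mean(axis=(1, 3))
    return np.abs(coarse_gray(frame_tp1) - coarse_gray(frame_t))   # reward = change in each coarse cell

# r = pixel_control_reward(np.zeros((84, 84, 3), np.uint8), 255 * np.ones((84, 84, 3), np.uint8))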
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
event -- or the other way around: if right now things look very bright in a certain pixel and it turns and makes that pixel darker, that would be a high reward again. That's not what we actually care about, but it's a very simple auxiliary loss that we can impose and run Q-learning against. It also turns out that this is the one that mattered the most for improving learning -- there are other auxiliary losses, but this is the one that matters most. The intuition for why this one matters the most is that in Q-
00:16:39
00:17:10
999
1030
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=999s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
learning you are learning about the effect of your actions. Because you're learning about the effect of many possible actions you could take (what would the Q value be if I turn to the left, what if I turn to the right, what if I look up or look down), you're really learning something about how the world works, and not just how the world works but how your actions interact with what happens in the world. Another auxiliary loss is reward prediction. What you do here is, for the current policy that you're executing,
00:17:10
00:17:40
1030
1060
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1030s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
you can try to predict at future time steps how much real reward you're going to get. So maybe you get rewards for eating an apple, and when you see an apple in the distance you should be able to predict that if you keep running forward for three more steps you'll get that apple. Learning to predict that in three steps you're going to get that apple is an auxiliary loss that's introduced here (a sketch of such a reward-prediction head follows this row). And then the last auxiliary loss is value function replay, which is saying: from the current time, how much reward am I going to get over the next
00:17:40
00:18:10
1060
1090
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1060s
https://i.ytimg.com/vi/Y…axresdefault.jpg
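A minimal sketch of a reward-prediction auxiliary head: given a short stack of recent features from the shared torso, classify the upcoming real reward. The three-way (negative / zero / positive) classification, the history length, and the feature dimension are assumptions for illustration rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

class RewardPredictionHead(nn.Module):
    """Classify the sign of the upcoming reward from recent features (sketch)."""

    def __init__(self, feature_dim=128, history=3, n_classes=3):
        super().__init__()
        self.classifier = nn.Linear(feature_dim * history, n_classes)

    def forward(self, feature_stack):
        # feature_stack: [batch, history, feature_dim] from the shared torso.
        flat = feature_stack.flatten(start_dim=1)
        return self.classifier(flat)  # logits over {negative, zero, positive}
```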
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
few steps. This is a loss that, in a sense, already exists in the base A3C agent; the replay just gives extra training on it (a sketch of this value replay loss follows this row). All right, so if you look at results here, this is DeepMind Lab, that 3D navigation environment where you collect apples and other fruits as a reward, and we can look at different approaches. The bottom curve we are looking at is the base A3C agent, that's the dark blue bottom curve, and the hope is that by having auxiliary losses we can do better. If we incorporate all the ideas that we just covered you get the UNREAL agent, which is this top curve here, and now
00:18:10
00:18:49
1090
1129
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1090s
https://i.ytimg.com/vi/Y…axresdefault.jpg
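A minimal sketch of a value-replay style loss: regress the value head toward discounted n-step returns computed on a short sequence sampled from a replay buffer. The return computation shown is a generic backward recursion with bootstrapping; the paper's exact replay and unroll details may differ.

```python
import torch
import torch.nn.functional as F

def value_replay_loss(value_net, obs_seq, reward_seq, bootstrap_value, gamma=0.99):
    """Regress V(s_t) toward discounted returns on a replayed sequence (sketch).

    value_net: callable mapping [T, obs_dim] to [T, 1].
    obs_seq: [T, obs_dim]; reward_seq: [T]; bootstrap_value: float estimate of V(s_T).
    """
    returns = []
    g = bootstrap_value
    for r in reversed(reward_seq.tolist()):
        g = r + gamma * g                  # work backwards through the sequence
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    values = value_net(obs_seq).squeeze(-1)  # predicted V(s_t) for each step
    return F.mse_loss(values, returns)
```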
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
what we see here are various ablations to find out which one of these matters the most and which ones might not contribute very much. What we see is that if you just do the pixel control auxiliary loss, that's the yellow curve, you get almost all of the juice of these auxiliary losses, but if in addition you have the reward prediction and the value replay you get yet a little better performance. Another thing I want to highlight here: the text at the top of the graph says average of the top three agents, so there's a way to evaluate things in
00:18:49
00:19:23
1129
1163
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1129s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
this paper. Usually in reinforcement learning, because you need to explore and there's a lot of randomness in exploration, the results are somewhat unpredictable, meaning that some runs will be more lucky than other runs; it'll be high variance. And so what they do here is pick the top three runs. You might say, why the top three, isn't that a bit crazy, shouldn't you look at the average performance or something like that? Yes, you could argue you should look at the average performance, and that's what's done in most papers, but the thinking here
00:19:23
00:19:50
1163
1190
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1163s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
was: imagine what you're interested in is finding a policy and you have a budget of maybe 20 runs. Then maybe what matters is the best one among those 20 runs, or, to be a little more robust about it, how the best three runs do. An approach where the best three runs are consistently great is an approach where, if you can afford 20 runs total, you'll have a good one among them. So it's kind of a funny way to score things, but it happens to be how they do things in this paper (a small sketch of this top-three scoring follows this row)
00:19:50
00:20:25
1190
1225
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1190s
https://i.ytimg.com/vi/Y…axresdefault.jpg
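A tiny numpy sketch of the "average of the top three runs" scoring just described. The array layout and the choice to rank by final score are assumptions; the paper may rank by a different statistic.

```python
import numpy as np

def top_k_average_curve(learning_curves, k=3):
    """Average the k best runs, ranked here by final performance (sketch).

    learning_curves: array of shape [n_runs, n_eval_points].
    """
    final_scores = learning_curves[:, -1]
    best = np.argsort(final_scores)[-k:]       # indices of the k best runs
    return learning_curves[best].mean(axis=0)

# Example: 20 noisy runs, report the curve averaged over the best 3.
curves = np.random.rand(20, 50).cumsum(axis=1)
print(top_k_average_curve(curves, k=3).shape)  # (50,)
```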
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Another thing they compared with, which we as unsupervised learning students are of course very curious about: if you do pixel control, why not do feature control, why not a Q function for later layers in the network? For later layers in the network I want to see, if I take an action, can I change the feature values in maybe layer five or layer six, instead of just the pixel values. Well, we see A3C plus feature control in green and A3C plus pixel control in orange, and you can see that pixel control actually works better
00:20:25
00:21:01
1225
1261
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1225s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
Of course this might depend on the environment, and it might depend on the exact architecture, but the experiments that were done in this paper showed that pixel control actually slightly outperformed feature-based control (a sketch of a feature-control style reward follows this row). And again, control here means the auxiliary loss using the auxiliary Q functions; the ultimate reward function that you're actually optimizing for and scoring against on the vertical axis here is the real reward function of collecting the fruits in the maze. Then here are a couple of unsupervised RL
00:21:01
00:21:31
1261
1291
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1261s
https://i.ytimg.com/vi/Y…axresdefault.jpg
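For comparison with pixel control, here is a minimal sketch of what a feature-control style reward could look like: reward the magnitude of change in a chosen later layer's activations. The choice of layer, the per-unit reward, and the lack of normalization are assumptions, not the paper's exact recipe.

```python
import torch

def feature_control_rewards(feature_extractor, obs, next_obs):
    """Reward = how much a later layer's activations change between steps (sketch).

    feature_extractor: callable returning activations of e.g. layer 5 or 6.
    """
    with torch.no_grad():
        feats = feature_extractor(obs)
        next_feats = feature_extractor(next_obs)
    # One reward per feature unit; these could also be grouped or summed.
    return (next_feats - feats).abs()
```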
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
baselines. So what are some other things we could be looking at? Again, pixel control is shown in yellow, that's the top curve in both plots. Then input change prediction tries to have an auxiliary loss that says, can I predict how what I see will change as a function of my action, so that's really learning a dynamics model; that's shown in blue. And then shown in green is input reconstruction, which is a bit like an autoencoder: I have an input, make a latent representation, and reconstruct it back out (both baseline losses are sketched after this row). And so what we see is that these
00:21:31
00:22:04
1291
1324
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1291s
https://i.ytimg.com/vi/Y…axresdefault.jpg
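The two baselines just mentioned, sketched as losses. The forward-model, encoder, and decoder modules are hypothetical placeholders; the point is only the shape of each objective.

```python
import torch.nn.functional as F

def input_change_prediction_loss(forward_model, obs, action, next_obs):
    """Predict the next observation from the current one and the action (sketch)."""
    predicted_next = forward_model(obs, action)
    return F.mse_loss(predicted_next, next_obs)

def input_reconstruction_loss(encoder, decoder, obs):
    """Autoencoder-style reconstruction of the current observation (sketch)."""
    reconstruction = decoder(encoder(obs))
    return F.mse_loss(reconstruction, obs)
```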
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
things that might seem more natural and more advanced, like input reconstruction and input change prediction, are actually less effective than pixel control. Of course there could be many, many factors at play here, but the high-level intuition that most people have for why these auxiliary Q functions work so well is that, as we work with auxiliary Q functions, we're actually learning about not just how the world works, which is the input change
00:22:04
00:22:44
1324
1364
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1324s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
prediction, but how we are able to affect what happens in the world, and that's really what matters for learning to achieve high reward on the task you care about. Now, another domain they looked at, rather than first-person maze navigation, is Montezuma's Revenge. Montezuma's Revenge is a famous Atari game where exploration is very difficult: there are many, many rooms, in every room there are complicated things you have to do, collecting keys, jumping over things, and if you make one mistake you're dead and you start back at the beginning.
00:22:44
00:23:16
1364
1396
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1364s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
The plot shows you that UNREAL outperforms A3C by quite a bit; A3C at the bottom, I think in black, is really not getting anywhere, whereas the UNREAL approach is doing a lot better. Now let's take a look at this maze navigation agent in action. This is DeepMind Lab; let's watch the agent playing. You see the agent collecting the apples here and not collecting the lemons; apparently it's not good in this particular game to collect the lemons. And so this agent has learned to navigate mazes. The way it can
00:23:16
00:24:02
1396
1442
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1396s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
do that, by the way, is because it has an LSTM, which gives it memory, so it can remember places it's been before and things it has tried before, to more efficiently find the next new location where there might be a fruit it hasn't collected yet. The reason I'm showing these specific results is that in the space of, well, reinforcement learning in general, but especially representation learning for reinforcement learning, the evaluations aren't all in the same type of environments; there's a lot of variation in how these things get evaluated,
00:24:02
00:24:37
1442
1477
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1442s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
and so having a good feel for what these experiments actually look like is important to get a sense for how advanced this method might really be. And we see here as well that this first-person navigation is pretty complicated, so this might be a pretty advanced method at play here. Here we see a bit of an inside look into the agent itself, where on the top right you see the pixel control Q values: depending on which action I take, for the actions available, how high will my Q value be, which really
00:24:37
00:25:10
1477
1510
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1477s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
corresponds to an understanding of how the world works, of what will change in what I see as a function of the actions I take. All right, so to summarize the UNREAL losses: there's the original A3C loss, which is the policy gradient loss plus a value function loss; then there is the value replay loss, which looks at replayed data for extra value prediction; then there are the pixel control Q functions for the different coarse pixels in the view of the agent; and then finally there's the reward prediction loss. One small note on the reward prediction: they ensured there was
00:25:10
00:25:48
1510
1548
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1510s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
an equal proportion of rewarding and non-rewarding examples, so a balanced training set. The pixel control view did get split into a 20-by-20 grid of cells (the combined UNREAL objective is sketched after this row). All right, so in the Atari results we see that UNREAL also helps over A3C, not nearly as much as in the DeepMind Lab environments, but still a significant improvement. The vertical axis here is human-normalized performance. The way DeepMind evaluates this is they look at the score achieved in each Atari game; for every game that's going to be a different number, because every game has a
00:25:48
00:26:23
1548
1583
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1548s
https://i.ytimg.com/vi/Y…axresdefault.jpg
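A minimal sketch of how the four loss terms summarized above could be combined into one objective, plus the balanced sampling for reward prediction. The weights, loss inputs, and sampling code are assumptions; the real UNREAL implementation differs in many details (n-step returns, LSTM unrolls, replay buffer handling).

```python
import random

def unreal_total_loss(a3c_loss, value_replay, pixel_control, reward_prediction,
                      w_vr=1.0, w_pc=1.0, w_rp=1.0):
    """Weighted sum of the base A3C loss and the three auxiliary losses (sketch)."""
    return a3c_loss + w_vr * value_replay + w_pc * pixel_control + w_rp * reward_prediction

def balanced_reward_batch(rewarding, non_rewarding, batch_size=32):
    """Sample an equal number of rewarding and non-rewarding examples (sketch)."""
    half = batch_size // 2
    return random.sample(rewarding, half) + random.sample(non_rewarding, half)
```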
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
very different scoring system, and then they normalize it across games and get a total score across all games for how well the agent learns. So it is across many, many Atari games: on average, how fast does the learning curve go up? You cannot overfit onto one game or another and do well on this score; you need to be able to learn well on all of the games (the human-normalization formula is sketched after this row). All right, and then they also look here at robustness, because there are many agents being trained; beyond the top-three curves,
00:26:23
00:26:55
1583
1615
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1583s
https://i.ytimg.com/vi/Y…axresdefault.jpg
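The human-normalized score mentioned above is commonly computed relative to random-play and human reference scores; a small sketch, with the reference numbers as made-up placeholder inputs.

```python
def human_normalized_score(agent_score, random_score, human_score):
    """0.0 = random play, 1.0 = human-level (standard Atari convention)."""
    return (agent_score - random_score) / (human_score - random_score)

# Example with made-up numbers for one game.
print(human_normalized_score(agent_score=1200.0, random_score=150.0, human_score=7000.0))
```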
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
this plot exposes the performance of all the agents, so it's a little evaluation of robustness. We see that there's a bit of decaying performance, not all agents learn equally well, but it's not that there's just one that does well and nobody else does, so this looks pretty good. Okay, so that's the first thing I wanted to cover, which is auxiliary losses, and UNREAL is a very good example of that. There's more work happening all the time in this space, but that was the big initial result that showed this is something that can
00:26:55
00:27:24
1615
1644
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1615s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
be very beneficial. Let's switch gears to state representation, and this one will have many, many subsections, it turns out. The first one is how to go from observation to state. The paper that most people might be most familiar with here is the World Models paper by David Ha and collaborators, and here's a simple diagram showcasing what they investigated. What you have is an environment, the environment leads to an observation, in this case pixel values, and that can be very high dimensional; for example, take a
00:27:24
00:28:03
1644
1683
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1644s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
100-by-100 image, that's 10,000 pixels, a very high dimensional input. Then they say, well, that's kind of a lot; we want our agent to work on something lower dimensional, because we know that under the hood there is a state of the world, and that state might be summarized with just a small set of numbers, maybe 10, 20, 30 numbers is enough. So my agent shouldn't have to do reinforcement learning on that 10,000-number input; it should be doing reinforcement learning on this, say, 30-number input, and it might be able to learn a lot
00:28:03
00:28:32
1683
1712
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1683s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
more quickly, because credit assignment should be easier if we only have to look at 30 numbers instead of 10,000 numbers. And so what they use is a variational autoencoder, which we of course covered earlier in this course, to find a latent representation from which we can reconstruct the observation, but then we use the latent representation as the input to the reinforcement learning agent, which now hopefully will be more efficient (a minimal encoder-to-policy sketch follows this row). The next thing they do in this approach is train a recurrent neural network that learns to
00:28:32
00:29:06
1712
1746
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1712s
https://i.ytimg.com/vi/Y…axresdefault.jpg
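A minimal sketch of the observation-to-latent idea: a small convolutional VAE encoder maps an image to a low-dimensional latent, and the policy consumes that latent instead of raw pixels. Layer sizes, the 32-dimensional latent, and the tiny policy are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvVAEEncoder(nn.Module):
    """Encode an image observation into a low-dimensional latent z (sketch)."""

    def __init__(self, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.LazyLinear(latent_dim)      # mean of q(z | x)
        self.fc_logvar = nn.LazyLinear(latent_dim)  # log-variance of q(z | x)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar

# The policy then acts on the 32-dimensional z instead of the raw pixels.
policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
```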
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
predict the next latent state (a sketch of such a latent dynamics RNN follows this row). So what's going to happen here is we learn a way to simulate how the world works in this recurrent neural network, but not by directly simulating in pixel space; instead we simulate in the latent space, which can go a lot faster since it's much lower dimensional, and we don't have to render the world at all times, we can just simulate how the latent variables evolve over time. Of course this will also depend on the actions taken, so the model takes the action and the previous latent state and generates the next latent
00:29:06
00:29:41
1746
1781
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1746s
https://i.ytimg.com/vi/Y…axresdefault.jpg
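A minimal sketch of a recurrent model that predicts the next latent from the current latent and action. The original World Models paper uses a mixture-density RNN, while this sketch uses a plain LSTM with a deterministic output, so treat it as a simplified stand-in with assumed dimensions.

```python
import torch
import torch.nn as nn

class LatentDynamicsRNN(nn.Module):
    """Predict z_{t+1} from (z_t, a_t) with an LSTM hidden state (sketch)."""

    def __init__(self, latent_dim=32, action_dim=3, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim + action_dim, hidden_dim, batch_first=True)
        self.to_next_z = nn.Linear(hidden_dim, latent_dim)

    def forward(self, z_seq, a_seq, hidden=None):
        # z_seq: [batch, T, latent_dim], a_seq: [batch, T, action_dim]
        x = torch.cat([z_seq, a_seq], dim=-1)
        out, hidden = self.rnn(x, hidden)
        return self.to_next_z(out), hidden  # predicted next latent at every step

# Training would regress the prediction at step t onto the encoder's z at step t+1.
```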
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
state. And of course you want it to be the case that that matches up with the actual next latent state that your VAE would output when you get to observe the next frame. The action also gets fed into the environment, so you have kind of two paths here: the actual environment path and the RNN prediction path, and you hope that they line up, or rather you train the RNN to make them line up. The thing in blue is called the world model; it's the thing that, by
00:29:41
00:30:15
1781
1815
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1781s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
looking at the action and the latent state, turns it into the next latent state. All right, so they looked at this in the context of car racing. On the left you see the environment: there's the road, you're supposed to stay on the road, and the reward is set up so that you race down this road as quickly as possible. This is from pixel input, so you get a lot of numbers as input, and somehow you hope that gets turned into effectively an understanding of roughly where the road is, where your car is on that road, and which direction it's
00:30:15
00:30:46
1815
1846
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1815s
https://i.ytimg.com/vi/Y…axresdefault.jpg
YqvhDPd1UEw
L12 Representation Learning for Reinforcement Learning --- CS294-158 UC Berkeley Spring 2020
facing, your car, and then understand how to steer it to go down the road as quickly as possible. The procedure they followed is: collect 10,000 rollouts from a random policy, then train a VAE to encode frames into a z space, just a thirty-two-dimensional z space, so low dimensional compared to the pixel input space. Then they train an RNN model to predict the next latent state from the previous latent state and action, and there's an additional hidden state inside the RNN (a generic evolution-strategy sketch of the final controller training follows this row). Then they use evolution, which is just one of many possible RL
00:30:46
00:31:23
1846
1883
https://www.youtube.com/watch?v=YqvhDPd1UEw&t=1846s
https://i.ytimg.com/vi/Y…axresdefault.jpg
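The last step trains a small controller on the latent (and RNN hidden state) with evolution. Here is a minimal, generic evolution-strategy sketch for a linear controller, not the CMA-ES setup actually used in the paper; all dimensions and the toy fitness function standing in for episode return are assumptions.

```python
import numpy as np

def evolve_linear_controller(fitness_fn, input_dim=32 + 256, action_dim=3,
                             population=16, iterations=10, sigma=0.1):
    """Simple evolution strategy for a linear controller W: input -> action (sketch)."""
    theta = np.zeros(input_dim * action_dim)
    for _ in range(iterations):
        noise = np.random.randn(population, theta.size)
        candidates = theta + sigma * noise
        scores = np.array([fitness_fn(c.reshape(action_dim, input_dim)) for c in candidates])
        # Move the parameters toward the better-scoring perturbations (plain ES update).
        weights = (scores - scores.mean()) / (scores.std() + 1e-8)
        theta = theta + sigma * (weights @ noise) / population
    return theta.reshape(action_dim, input_dim)

# Example with a toy fitness function standing in for cumulative driving reward.
toy_fitness = lambda W: -np.abs(W).sum()
print(evolve_linear_controller(toy_fitness).shape)  # (3, 288)
```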