Dataset schema (all columns are strings; lengths given as min-max characters):
video_id: 11-11
title: 0-100
text: 513-648
start_timestamp: 8-8
end_timestamp: 8-8
start_second: 1-5
end_second: 2-5
url: 48-52
thumbnail: 0-52
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Hi there, today we're going to look at CURL: Contrastive Unsupervised Representations for Reinforcement Learning by Aravind Srinivas, Michael Laskin and Pieter Abbeel. This is a general framework for unsupervised representation learning for RL. So let's untangle the title a little bit. It is for reinforcement learning; if you don't know what reinforcement learning is, I've done a bunch of videos on RL frameworks. It's for general reinforcement learning, meaning it can be paired with almost any RL algorithm out there, so we're not
00:00:00
00:00:42
0
42
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=0s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
going to, you know, dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal for RL, which is pretty cool because usually entire RL pipelines rely on some sort of reward or auxiliary reward signal. Now there is a training objective here, but it doesn't have to do with the RL reward. And then, it is learning representations, which means it learns intermediate representations of the input data that are useful. And in the end
00:00:42
00:01:23
42
83
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=42s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
it is contrastive and that is the the kind of secret sauce in here the training objective it's what's called contrastive learning and that's what we're going to spend most of our time on today exploring what that means alright so here's the general framework you can see it down here sorry about that so you can see that reinforcement learning is just a box which is we don't care about the RL algorithm you use that's just you know what what comes at the end what comes at the beginning oh here is the observation so the observation in an RL
00:01:23
00:02:04
83
124
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=83s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
algorithm is kind of fundamental now if someone explains RL to you or reinforcement learning usually what they'll say is there is some kind of actor and there is some kind of environment right and the environment will give you an observation right observation Oh which is some sort of let's say here is an image right so in this in this RL framework specifically the examples they give are of image based reinforcement learning so let's say the Atari game where you have this little spaceship here and there are meteorites up here and you need to shoot
00:02:04
00:02:48
124
168
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=124s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
them so there is a little shot here right you need to shoot those meteorites right so this is the observation oh and then as an age as an actor you have to come up with some sort of action and the actions here can be something like moved to the left move to the right press the button that you know does the shooting so you have to come up with an action somehow given this observation and then the environment will give you back a reward along with the next observation like the next frame of the game and you're gonna have to come up with
00:02:48
00:03:23
168
203
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=168s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
another action in response to that, and the environment is going to give you back another reward and the next observation, and so on. So what you want to do is find a mapping from observation to action such that your reward is going to be as high as possible; this is the fundamental problem of RL. And usually what people do is they take this mapping here from observation to action to be some sort of function, a function that is parameterized maybe, and nowadays of course it's often a neural network.
00:03:23
00:04:02
203
242
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=203s
https://i.ytimg.com/vi/h…axresdefault.jpg
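For readers who want to see the interaction loop just described in code, here is a minimal sketch assuming a Gym-style environment; the `policy` callable and the older Gym reset/step signatures are illustrative assumptions, not anything from the CURL paper.

```python
# Minimal observation -> action -> reward loop, assuming the classic Gym API
# (env.reset() returns an observation, env.step() returns obs/reward/done/info).
import gym

def run_episode(env, policy):
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = policy(obs)                        # f_theta: observation -> action
        obs, reward, done, info = env.step(action)
        total_reward += reward                      # the quantity we want to maximize
    return total_reward

# Hypothetical usage: run_episode(gym.make("Pong-v4"), policy=lambda obs: 0)
```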
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
So you're trying to learn, given the input observation, what output action you need to take, and you can think of the same here. You have this input observation up here, and down here, after the reinforcement learning, the output is going to be an action. And so this function we talked about up here is usually implemented as: you put the observation into the RL framework, and then the RL framework learns this f of theta function to give you an action. Now here you can see the pipeline is a bit different: we don't
00:04:02
00:04:39
242
279
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=242s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
want to shove the observation in directly right we don't want the observation directly but what we put into the RL framework is this Q thing now the Q is supposed to be a representation of the observation and a useful representation so if we think of this of this game here of this Atari game up here what could be the what could be a useful representation if if I had to craft one by hand how would I construct a useful representation keep in mind the representation the goal is to have a representation of the observation that is more useful to the
00:04:39
00:05:22
279
322
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=279s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
RL algorithm than just the pure pixels of the image right so if I have to craft a representation let's say it's a vector right let's say our our our representations need to be vectors what I would do is I would probably take the x and y coordinates of the little spaceship right x and y and put it in the vector that's pretty useful and then I would probably take the x and y coordinates of the meteorites that are around right let's say there are maximum two XY XY here I would probably take the angle right the angle where my spaceship
00:05:22
00:06:07
322
367
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=322s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
is pointing to; that should be pretty useful, because if I shoot I want to know where I shoot, right? So theta here, and then probably the x and y coordinates of the shot here, of the red shot that I fired, if there is one; I'm also going to put that into my representation, so x and y, and maybe delta x, delta y, something like this. So you can see, if I had to handcraft something, I can pretty much guarantee the following.
00:06:07
00:06:52
367
412
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=367s
https://i.ytimg.com/vi/h…axresdefault.jpg
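As a concrete, purely hypothetical version of the handcrafted representation sketched above, the vector could be assembled like this; the field layout, the cap of two meteorites, and the shot-velocity entries are illustrative assumptions, not from the paper.

```python
# Hypothetical handcrafted state vector for the Atari-style example:
# ship position, up to two meteorite positions, firing angle, shot position and velocity.
import numpy as np

def handcrafted_state(ship_xy, meteor_xys, angle, shot_xy, shot_vxy):
    meteors = (list(meteor_xys) + [(0.0, 0.0)] * 2)[:2]   # pad/truncate to two meteorites
    vec = [*ship_xy, *meteors[0], *meteors[1], angle, *shot_xy, *shot_vxy]
    return np.asarray(vec, dtype=np.float32)              # compact vector an RL agent could consume
```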
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
If I put this representation right here into the RL algorithm instead, it would turn out, guaranteed, to be a better RL agent that learns faster than if I put in the original observation, which is the pixel image of the game. Because of course, in order to play the game correctly, in order to play the game to win, you need to extract this information: there's something like a spaceship, there's something like meteorites. These are all things the RL agent doesn't know per se and would have to learn from the pixels. But if I already give it the information
00:06:52
00:07:29
412
449
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=412s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
that is useful it can learn much faster all right so you can see if I handcraft a good representation it's pretty easy for the RL algorithm to improve now we want to come up with a framework that automatically comes up with a good representation right so it alleviates the RL algorithm here that reinforcement it alleviates that from learn from having to learn a good representation right it already is burdened with learning the what a good action is in any given situation right we want to alleviate it of the burden to also
00:07:29
00:08:10
449
490
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=449s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
extract useful information from the observation space, right? So how do we do this? This Q here is supposed to be exactly that: a good representation, but not one that we handcrafted, rather one obtained with a technique that can be employed pretty much everywhere. And the secret sauce here is this contrastive loss thing. This contrastive learning is the kind of magic thing that will give us good representations. So what is contrastive learning? In this case I'm
00:08:10
00:08:55
490
535
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=490s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
going to explain in this case for this kind of image based for image based reinforcement learning but just for image based neural networks how can we come up with a contrastive loss so you see there's kind of a two pipeline thing going on here there is like this and this and then one of them is going to be the good encoding all right so let's check it out let's say we have this image that we had before right draw it again this little spaceship this and this and so right and we want to we want to do this what we
00:08:55
00:09:50
535
590
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=535s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
need to do is we need to produce three different things from it we need to produce an anchor what's called an anchor so we need to produce a positive sample positive sample and we need to produce negative samples let's just go with one negative sample for now right so the goal is to come up with a task that where we produce our own labels right so we want since we're training a encoder and the encoder is a neural network that's parametrized we need some sort of loss function so the goal is to come up with a method where we can
00:09:50
00:10:31
590
631
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=590s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
create our own labels for a task, but where we construct the task in a way such that the neural network has no choice but to learn something meaningful, even though we made the task up ourselves. All right, I hope this was kind of clear. So how are we going to do this? Our method of choice here is going to be random cropping. Now random cropping means that I take an image and I crop a piece from it, a smaller piece from the image; I just take a view inside the image. So in the case of the anchor, I'm gonna draw the same picture here, bear with me,
00:10:31
00:11:16
631
676
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=631s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
I'm gonna draw the same picture here a couple of times this is all supposed to be the same picture and with the negative sample I'm just gonna leave it empty for now there are two meteorites two meteorites shot shot right so for the anchor we're going to actually not random crop but center crop right so we're going to take here the center image right so the assumption is kind of that if I Center if I Center crop I won't lose you know too much of the image I can actually make the crop bigger such that almost everything of
00:11:16
00:11:59
676
719
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=676s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
the image is somewhat contained in this, and that, yeah, all right, so this is going to be my anchor. And then the positive sample is going to be a random crop of the same image, so I'm just going to randomly select a same-size section from that image; let's say this is up right here. All right, and the negative sample is going to be a random crop from a different image. So a different image might be from the same game, but maybe there is a meteorite here and there is no shot, I don't shoot, and I'm going to take a random crop from this.
00:11:59
00:12:45
719
765
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=719s
https://i.ytimg.com/vi/h…axresdefault.jpg
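A small sketch of how the anchor, positive, and negative crops described above could be produced; the crop size and the NumPy (H, W, ...) image layout are illustrative assumptions.

```python
# Center crop for the anchor, random crop of the same image for the positive,
# random crop of a different image for the negative.
import numpy as np

def random_crop(img, size):
    h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def make_triplet(obs, other_obs, size=64):
    anchor   = center_crop(obs, size)         # query view
    positive = random_crop(obs, size)         # another view of the same observation
    negative = random_crop(other_obs, size)   # view of a different observation
    return anchor, positive, negative
```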
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Let's say I'm going to take a random crop here; let's put a meteorite here as well, just for fun. All right, so these are going to be our three samples, and now the question is going to be: if I give the anchor to the neural network, I'm going to say, I give you the anchor, but I'm also going to give you this and this thing, and I'm not going to give any of this, I'm just going to give whatever I cropped, so just these things. So I ask the neural network: neural network, I give you the
00:12:45
00:13:39
765
819
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=765s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
anchor now which one of these two which one of these two crops comes from the same image right so as human you look at this and if you just see the center crop you see oh okay down here there's this this tip of this thing and then there's the shot right and in relation to the shot there is a meteor here right and then you look at the second one and you say okay I don't see the spaceship but there's the same relation here from the shot to the meteor and I can kind of see the meteor up here and this also fits with that right and the the spaceship
00:13:39
00:14:18
819
858
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=819s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
must be, you know, down here somewhere, and then I go over here and I try to do the same thing: okay, here's the meteor, and, you know, it might be in the original image, it might be over here somewhere, that's possible, I don't see it, that's possible. But then there should be a shot somewhere here, or sorry, further up, there should be a shot somewhere here, right? I'm pretty sure, because there's one over here and I don't see it. So I am fairly sure that this image here
00:14:18
00:15:03
858
903
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=858s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
is the positive sample while this image here is the negative sample right so this is the task that you ask of the neural network give it the anchor and you ask which one of the of these two comes from the same image right this is called contrastive learning now is a bit more complicated in that of course what you do is you encode these things using neural networks and then so each of the things you encode so the anchor you're going to encode all of these things using a neural network right and then this is what's going to
00:15:03
00:15:50
903
950
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=903s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
become the query, and these are becoming the keys, so key one and key two, and then you're always going to feed two of them into a bilinear product. The bilinear product, you can simply think of it as an inner product in a transformed space that you can learn. So you have these two here, these go into q W k1, and then these two here, sorry, this and this, go into q W k2. Now W here is a learnable parameter, so you have some freedom, and then you basically take whichever one of those two is highest, right? So this might
00:15:50
00:16:39
950
999
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=950s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
be this high and this might only be this high, and then you say, aha, cool, this one's higher, so this one must be the positive. And you train the W specifically to make the positive ones higher and the negative ones lower. So this is a supervised learning task, where these things here are going to be the logits, or rather their inner products, but you basically then pick the one that is highest in a softmax way, and they put this in the paper. So if we go down here, the objective that they use to
00:16:39
00:17:23
999
1043
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=999s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
do the contrastive learning is this one. As you can see, it's a softmax, like in multi-class classification, of the bilinear product with the positive sample over the bilinear product with the positive sample plus the bilinear products with all of the negative samples; so you're going to come up with more than one negative sample. All right, now the only thing left that we don't have here is the encoding: how you're going to come from the image space to this space here is going to be slightly different.
00:17:23
00:18:09
1043
1089
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1043s
https://i.ytimg.com/vi/h…axresdefault.jpg
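The objective just described can be written compactly as a softmax cross-entropy over bilinear similarities; the sketch below assumes a PyTorch setup where each row of `keys` is the positive for the matching row of `q` and every other row acts as a negative (shapes and names are assumptions for illustration).

```python
# InfoNCE-style contrastive loss with a learnable bilinear similarity q^T W k.
import torch
import torch.nn.functional as F

def contrastive_loss(q, keys, W):
    # q: (B, D) encoded anchors; keys: (B, D) encoded keys; W: (D, D) learnable matrix
    logits = q @ W @ keys.t()                            # (B, B) bilinear products
    labels = torch.arange(q.size(0), device=q.device)    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)               # softmax over positive + negatives
```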
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
It depends on whether you're encoding the anchor or what are called the keys, the things you compare to, and this comes from a kind of stability criterion. You maybe know something like double Q-learning or things like this: sometimes, when you train with your own thing, so in Q-learning you're kind of trying to come up with an actor and a critic, or, it's not the same thing, but you're kind of using the same neural network twice in your setup and then you compare the outputs to each other,
00:18:09
00:18:53
1089
1133
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1089s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
which, you know, leads to instability. So in our case we have it three times here, or multiple times; especially for this same objective here we have twice something that was encoded by the same neural network on the two sides of this bilinear product. So if we were to use the same neural network, that tends to be somewhat unstable, so we have different neural networks: one that will encode the query, which is this f_q, and one which will encode the keys, f_k. Now we don't want to learn two neural networks, and that's why
00:18:53
00:19:36
1133
1176
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1133s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
there's a bit of a compromise, where we say it is the same neural network, but basically this one is the one we learn, and then every now and then we transfer over the parameters to that one; in fact, each step we transfer over the parameters and do an exponential moving average with the parameters of this momentum encoder from the step before. So the momentum encoder parameters are a moving average of the parameters of the query encoder, and that way you kind of get the best of both worlds.
00:19:36
00:20:21
1176
1221
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1176s
https://i.ytimg.com/vi/h…axresdefault.jpg
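Here is a sketch of the exponential-moving-average (momentum) update just described; the momentum value 0.999 is an illustrative assumption, not a number taken from the video.

```python
# Key-encoder parameters track the query encoder as an exponential moving average.
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    for p_q, p_k in zip(query_encoder.parameters(), key_encoder.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)   # p_k <- m * p_k + (1 - m) * p_q
```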
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
You don't have to learn a second neural network, but your second neural network is not the same as your first neural network; it kind of lags behind, but it is also performing almost as well. That is, I don't know if that makes sense, but it is the best I can do to explain it. So, to recap: you take your observation, you crop here for your anchor, that gets encoded into your query, and then you random crop for your keys, into positive and negative samples, so random crops from the same observation or from
00:20:21
00:21:13
1221
1273
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1221s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
different observations, right? These become your positive and negative samples. Then you push these through your encoders for the query and for the keys respectively, and you end up with the q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. And then you update this encoder here using the contrastive loss, and at the same time you feed the q here into the reinforcement learning algorithm, and you learn your reinforcement learning algorithm, instead
00:21:13
00:22:01
1273
1321
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1273s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
of giving it the observation directly as an input; you now have the q here as an input. That is it: the reinforcement learning works exactly the same, except instead of having the input O you now have the representation input q, and you don't have to worry about anything else in terms of the reinforcement learning algorithm, it stays exactly the same. This whole thing here can actually run in parallel, or you can think of it as off-policy or on-policy; it is sort of
00:22:01
00:22:41
1321
1361
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1321s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
modular in how you fit this in. It simply comes up with good representations, so that is basically the deal here, and you hope that the whole procedure of this contrastive learning then gives you a good representation of this anchor thing here. If you encode that to the q, you hope that this representation is now a good representation as a basis for the RL algorithm, and it turns out, at least in their experiments, it is. So here you see the same thing; they actually do something more, where in RL you usually deal
00:22:41
00:23:21
1361
1401
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1361s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
with a stack of observations not just a single observation because so for example in Atari people always concatenate something like for the four last frames right and their their point is okay if we have this stack here if we do this data augmentation you know these crops we kind of need to do them consistently right we need to crop every single image at the same point for the query and also if we do a random crop let's say a random crop down here we need to do this same random crop for all of the of the stack of images here right
00:23:21
00:23:59
1401
1439
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1401s
https://i.ytimg.com/vi/h…axresdefault.jpg
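A sketch of the consistent cropping across a stack of frames described above: one crop window is sampled and then applied to every frame. The stack layout (T, H, W, ...) and the crop size are illustrative assumptions.

```python
# Apply the SAME random crop to every frame of a stacked observation.
import numpy as np

def consistent_random_crop(obs_stack, size=64):
    h, w = obs_stack.shape[1:3]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return obs_stack[:, top:top + size, left:left + size]   # same window for all frames
```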
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
so um that that is kind of the additional thing they introduced it with respect to RL that deals with with stacked time frames but it's kind of the same the same diagram as above here right so they explained the the RL algorithms they use and exactly they're they're their thing and here you can see that anchor is a crop and the positive sample is a random crop from the same image this would be up here somewhere the anchor is cropped from the middle and then the negative would be a random crop from a different
00:23:59
00:24:42
1439
1482
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1439s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
image or a different stack of images. They have pseudocode here that is pretty simple, we'll just go through it quickly. You start off with f_q and f_k, these are the encoders for the query and keys, and you start them off the same. Then you go through your data loader, you do this random augmentation of your query and your keys, and I'm not even sure if the random augmentation actually needs to be a center crop for the anchor, or just two different crops from the same image; that might work just as
00:24:42
00:25:22
1482
1522
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1482s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
well, so, you know, I guess it's a thing you could choose, I don't know what exactly is the best thing. All right, then I forward the query through f_q and I forward the keys through f_k. Then, important, I detach this, so I don't want to train f_k, I only want to train f_q. Then I do the bilinear product here with the W, these are the bilinear products, and then I put all of this into a cross-entropy loss. In the end I update my f_q and my W, and I do an exponential moving average update for my key encoder.
00:25:22
00:26:13
1522
1573
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1522s
https://i.ytimg.com/vi/h…axresdefault.jpg
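Putting the walkthrough above together, one training step could look like the sketch below; the names (`f_q`, `f_k`, `W`, `augment`, an optimizer covering both `f_q` and `W`) and the momentum value are placeholders following the description in the video, not the authors' exact code.

```python
# One contrastive update step: encode query and keys, detach the key path,
# bilinear logits, cross-entropy, optimizer step, then EMA update of the key encoder.
import torch
import torch.nn.functional as F

def curl_step(batch, f_q, f_k, W, optimizer, augment, m=0.999):
    x_q = augment(batch)                       # random crop for the query/anchor
    x_k = augment(batch)                       # independent random crop for the keys
    q = f_q(x_q)                               # (B, D)
    with torch.no_grad():
        k = f_k(x_k)                           # keys are not trained by backprop
    logits = q @ W @ k.t()                     # (B, B) bilinear products
    labels = torch.arange(q.size(0), device=q.device)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                           # updates f_q and W
    with torch.no_grad():                      # momentum update of the key encoder
        for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
    return loss.item()
```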
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
And they test on two different things: the DeepMind control tasks, and they always test at 100k time steps, so their big point is data efficiency. They claim they can learn useful representations without much data, so the question here is: how good are you after 100k time steps? You don't optimize until the end, you just get 100k time steps, and then the question is how good you are. And CURL here outperforms all of the baselines handily in the DeepMind
00:26:13
00:27:01
1573
1621
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1573s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
control tasks, and it also outperforms a lot of the baselines in the Atari tasks. Actually, if you look at the results, it doesn't outperform everything, but for example here the red is CURL and the dashed gray is state SAC. Now with state SAC the important thing to note is that it has access to the state, whereas CURL only works from pixels. So, what I said before, if I had to craft a useful representation: basically state SAC has access to that, and you see that in many of the tasks CURL comes close
00:27:01
00:27:47
1621
1667
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1621s
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
or performs equally well to state SAC, so that's pretty impressive, especially if you look at pixel SAC, which is the same algorithm but does not have access to the state, just the pixels; it often fails terribly. So that is pretty interesting to see, and even to me it's pretty interesting to see that this kind of self-labeled algorithm comes up with such useful representations. All right, so I hope I have explained this satisfactorily, and check out the paper for more experiments
00:27:47
00:28:35
1667
1715
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1667s
https://i.ytimg.com/vi/h…axresdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
On April 21st Jürgen Schmidhuber tweeted out: stop crediting the wrong people for inventions made by others; at least in science, the facts will always win in the end; as long as the facts have not yet won, it is not yet the end; no fancy award can ever change that; hashtag self-correcting science, hashtag plagiarism; and links to an article on his own website where he wrote "Critique of Honda Prize for Dr. Hinton". So this is on Schmidhuber's own website and it's by himself, and don't you love this, how to pronounce his
00:00:00
00:00:41
0
41
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=0s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
name, Jürgen Schmidhuber, "you again", sorry, this is absolutely great. So both Schmidhuber and Hinton are actually on Twitter, you can tweet at them and follow them. This article here is basically a critique of the press release Honda put out when they awarded Geoff Hinton for his achievements, and it goes through it step by step; we won't look at the whole thing, just enough for you to get the flavor. So here Honda says: Dr. Hinton has created a number of technologies that have enabled the broader application of
00:00:41
00:01:21
41
81
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=41s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
AI, including the backpropagation algorithm that forms the basis of the deep learning approach to AI. And Schmidhuber just goes off on this; he basically claims: while Hinton and his co-workers have made certain significant contributions to deep learning, the claim above is plain wrong. He did not invent backpropagation; the person who invented backpropagation was Seppo Linnainmaa, and he says many papers failed to cite Linnainmaa, who was the original inventor of backprop, and so on, and he kind of goes
00:01:21
00:02:05
81
125
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=81s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
through a history of this and how it's even earlier. I always have a bit of trouble with claims about who invented what, because when is an algorithm really the same thing, and when is it a variation on another algorithm, and when is it something completely new? It's never entirely clear. But the points made here are that the backpropagation algorithm existed before Hinton, and also that some of the seminal papers did not cite the correct origin. Statement two: in 2002 he
00:02:05
00:02:42
125
162
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=125s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
introduced a fast learning algorithm for restricted Boltzmann machines that allowed them to learn a single layer of distributed representation without requiring any labeled data; these methods allowed deep learning to work better and they led to the current deep learning revolution. And he says: no, Dr. Hinton's interesting unsupervised pre-training for deep neural networks was irrelevant for the current deep learning revolution; in 2010 our team showed that feed-forward networks can be trained by plain backprop and do not at all require
00:02:42
00:03:16
162
196
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=162s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
pre-training. And he basically again says: apart from this, Hinton's unsupervised pre-training was conceptually a rehash of my unsupervised pre-training for deep recurrent neural networks. So, as you know, Schmidhuber has done a lot of work on recurrent neural networks, and he basically says it was just a rehash of his algorithm. Now I have to say, first of all, he makes a point here that we don't really do unsupervised pre-training anymore, until now of course, but, like, to train an MNIST
00:03:16
00:03:55
196
235
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=196s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
classifier you don't have to do that. But it's also doubtful; even if this wasn't on the exact path to the current situation, it was a thing that got people excited, maybe, and so the critique is like half valid. And also, it doesn't help, in my opinion, that he always compares it to his own things; just criticize them for general things, but then avoid bringing your own things in, because it just sounds like "I did this before". And also, I read some papers
00:03:55
00:04:34
235
274
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=235s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
from from these times people just wrote papers sometimes I haven't read this specific one but sometimes people just wrote papers writing down their ideas like one could do this and this and this never doing any experiments or actually specifying exactly what they mean they just kind of wrote down a bunch of ideas and that got published especially like there's some some reinforcement learning papers where people are just like oh one I imagine agents doing this and learning from that so it is again it is never
00:04:34
00:05:11
274
311
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=274s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
really clear. And ideas are just had by everyone; I think people mistake this, they think that the ideas are unique. It's not the ideas that are unique; many people have the same ideas, but there's also execution and exact formalization and so on, and the exact level of specificity, and all of this is really hard. And then Honda says: in 2009 Dr. Hinton and two of his students used multi-layer neural nets to make a major breakthrough in speech recognition that led directly to greatly improved systems, and this, of course, Schmidhuber goes off about,
00:05:11
00:05:48
311
348
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=311s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
because speech recognition is of course prime LSTM territory, so you don't want to go near this. And Honda further says he revolutionized computer vision by showing that deep learning worked far better than the existing state of the art, and again he says the basic ingredients were already there and so on, and our team in Switzerland already used the first superior award-winning GPU-based CNN and so on, what it's called, DanNet, which was produced by his group. And again, this seems correct when he lays it out like this, but
00:05:48
00:06:32
348
392
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=348s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
it doesn't change the fact that AlexNet won ImageNet in 2012, and that was like the start of the deep learning revolution; it was like, wow, you can cut the error rate by something like 30% simply by doing this deep learning stuff. So again, even if DanNet, as he says, blew away the competition, it always seems like Schmidhuber is kind of right, but then also he's not: he's all about the exact academic record, and the idea being there on a paper isn't the only thing that drives progress. And Honda says: to achieve
00:06:32
00:07:22
392
442
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=392s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
their dramatic results dr. Hinton also invented a widely used new method called dropout which reduces overfitting no like no and like no just no like randomly dropping parts in order to make something more robust that is surely not a new thing and he also says much early it is there's this stochastic Delta rule and so on and he also critiques that this paper did not cite this they just gave it the name right this is an idea that is kind of so simple that you you wouldn't even necessarily think about researching whether that has existed
00:07:22
00:08:08
442
488
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=442s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
already I think they just did it and then because it's a natural idea and then they gave it a name and the name stuck right it's not about the idea itself and then lastly they say of the countless AI based technological services across the world it is no exaggeration to say that few would have been possible without the results dr. Hinton created I love this name one that would not have been possible and he just gives a list of their own group and that are basically possible without Hinton's contributions and this is just it's a
00:08:08
00:08:47
488
527
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=488s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
bit of a cheap shot, right? Clearly Honda is not saying it would have been, you know, physically impossible without his contributions, but certainly Hinton has, even if he hadn't invented any of those things, he certainly has created like a spark; these things created a splash, got people excited, got people thinking about new ways of applying things, even, you know, if this is all true. So, right, but I would like you to notice: this is a critique of what Honda says about Hinton, and if I read
00:08:47
00:09:35
527
575
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=527s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
through the statements of Schmidhuber, most of them are technically correct, right? And, you know, so that was that, and then I thought, okay, cool, but then someone posted it and then Hinton replies, and this is, okay, don't you love this. So Hinton says: having a public debate with Schmidhuber about academic credit is not advisable because it just encourages him, and there is no limit to the time and effort that he is willing to put into trying to discredit his perceived rivals; he has even resorted to tricks
00:09:35
00:10:15
575
615
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=575s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
like having multiple aliases in Wikipedia to make it look as if other people agree; the page on his website about Alan Turing is a nice example of how he goes on. These are, like, these are shots fired. And he says: I'm going to respond once and only once; I have never claimed that I invented backpropagation; David Rumelhart invented it independently, after other people in other fields had invented it; it's true that when we first published we did not know the history. So he basically says, okay, we did forget to cite it when we
00:10:15
00:10:56
615
656
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=615s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
first published about backprop, but he doesn't say he invented it. What I've claimed is that I was the person to clearly demonstrate that backprop could learn interesting internal representations, and that this is what made it popular. So this goes in the direction: Schmidhuber is very much on "the academic contribution, the idea was there before", and Hinton basically says, no, what we did is we showed that it works in this particular way and we got people excited about it, and I did this by showing that, blah blah blah. And then
00:10:56
00:11:35
656
695
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=656s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
he says it is true that many people in the press have said I invented backprop and I've spent a lot of time correcting them. Here's an excerpt from 2018, I guess a quote from a book that quotes Hinton, where he says: lots of people invented different versions of backprop before David Rumelhart; they were mainly independent inventions; it's something I feel I've got too much credit for; it's one of these rare cases where an academic feels he has got too much credit for something; my main contribution was to show you can
00:11:35
00:12:08
695
728
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=695s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
use it for learning distributed representations, so I'd like to set the record straight on that. And then he says: maybe Jürgen would like to set the record straight on who invented LSTMs. Boom, boom, crazy shots fired by Hinton here, this is just great. But again, look at what Hinton says: Hinton basically says, yes, I have not invented that, I have corrected this on the public record in the past. And yeah, so that's what Hinton says, and I mean the comments here are just gold, I really invite you to read it, and then
00:12:08
00:12:56
728
776
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=728s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
Schmidhuber, of course, being Schmidhuber, replies again down here; he has a response to the reply, and I don't expect Hinton to reply again; I waited for a bit, but I believe him when he says he does it only once. So he goes into this: summary, the facts presented in sections 1, 2, 3, 4, 5 are still valid. So he goes kind of statement by statement: "having a public debate blah blah blah", and he says this is an ad hominem attack, which is true, right, this is true; and "he even has multiple aliases in Wikipedia",
00:12:56
00:13:40
776
820
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=776s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
and he just says: another ad hominem attack. And then he goes into the claim that Schmidhuber tries to discredit Alan Turing, and then Schmidhuber goes into this big, long claim that Alan Turing wasn't as important as people made him out to be, and that people invented these kinds of Turing machine equivalents before. Again, this is kind of Schmidhuber's take that the idea basically was already there and these people don't get the correct credit. And also he's correct that this is, in truth, an ad hominem attack, right? So, you know, be it
00:13:40
00:14:28
820
868
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=820s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
as it may, this is correct. And then, when Hinton says that he didn't claim to invent backprop, Schmidhuber responds: this is finally a response related to my post, which is true, right? However, he does not at all contradict what I wrote, and it is true that he credited his co-author Rumelhart with the invention, but neither cited Linnainmaa. And also, about the statement "lots of people", he says it wasn't created by lots of different people but exactly one person. This I find comical; like, can you really say now this is the
00:14:28
00:15:05
868
905
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=868s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
exact time when backprop was invented, even though it probably wasn't in the exact current formulation and probably existed somewhat like this before? But again, his main claim is: Dr. Hinton accepted the Honda Prize although he apparently agrees that Honda's claims are false; he should ask Honda to correct their statements. And, like, in the end: maybe Jürgen would like to set the record straight on who invented LSTMs; and, as you may know, Sepp Hochreiter kind of invented LSTMs under Jürgen Schmidhuber
00:15:05
00:15:48
905
948
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=905s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
as a PhD advisor. But, to summarize: Dr. Hinton's comments and ad hominem arguments diverge from the contents of my post and do not challenge the facts, and so on. And I have to say, after reading this, this is correct, right? Hinton basically replies: hey, I never claimed I invented backprop, and other people invented it; and Schmidhuber doesn't criticize Hinton in this particular post (he may otherwise); Schmidhuber doesn't criticize Hinton for claiming that, he criticizes Honda for claiming that
00:15:48
00:16:30
948
990
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=948s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
Hinton did it, and Hinton basically agrees with him. And also Schmidhuber says: Dr. Hinton accepted the Honda Prize although he apparently agrees that the claims are false; he should ask Honda to correct their statements. And it is true that Hinton accepted this prize under this release, right? Now you might be able to say, Hinton also says he's on the record basically saying he didn't do this, and I guess if you're Hinton and, you know, you've had a successful career and so on and you have previously
00:16:30
00:17:02
990
1022
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=990s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
really publicly stated that you didn't invent these things and, you know, made it clear, and then you get a prize and they write this thing, maybe you just don't want to go after every single press statement and correct it. But, you know, in essence Hinton understood this as an attack on himself, as if he claimed he invented backprop, while Schmidhuber says Honda claims he invented backprop and Hinton accepted the prize, so he agrees with it; and he basically does agree with it but doesn't ask Honda to correct it, which I
00:17:02
00:17:40
1022
1060
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=1022s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
hDQNCWR3HLQ
[Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
can understand. So this is my take on this issue: they're kind of both correct and they just kind of talk past each other. Schmidhuber is always on "the idea existed before", and Hinton is correct when he says it's not always just about the idea; progress is also made by people being excited, people actually getting something to work, people doing something at the right time in the right place, which is also correct. But it is fun, it is fun, so I just enjoyed this, honestly, because ultimately this is
00:17:40
00:18:28
1060
1108
https://www.youtube.com/watch?v=hDQNCWR3HLQ&t=1060s
https://i.ytimg.com/vi/h…LQ/hqdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
The power of yet. I heard about a high school in Chicago where students had to pass a certain number of courses to graduate, and if they didn't pass a course, they got the grade "Not Yet." And I thought that was fantastic, because if you get a failing grade, you think, I'm nothing, I'm nowhere. But if you get the grade "Not Yet", you understand that you're on a learning curve. It gives you a path into the future. "Not Yet" also gave me insight into a critical event early in my career, a real turning point. I wanted to see
00:00:00
00:00:56
0
56
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=0s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
how children coped with challenge and difficulty, so I gave 10-year-olds problems that were slightly too hard for them. Some of them reacted in a shockingly positive way. They said things like, "I love a challenge," or, "You know, I was hoping this would be informative." They understood that their abilities could be developed. They had what I call a growth mindset. But other students felt it was tragic, catastrophic. From their more fixed mindset perspective, their intelligence had been up for judgment, and they failed.
00:00:56
00:01:50
56
110
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=56s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
Instead of luxuriating in the power of yet, they were gripped in the tyranny of now. So what do they do next? I'll tell you what they do next. In one study, they told us they would probably cheat the next time instead of studying more if they failed a test. In another study, after a failure, they looked for someone who did worse than they did so they could feel really good about themselves. And in study after study, they have run from difficulty. Scientists measured the electrical activity from the brain as students confronted an error.
00:01:50
00:02:40
110
160
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=110s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
On the left, you see the fixed-mindset students. There's hardly any activity. They run from the error. They don't engage with it. But on the right, you have the students with the growth mindset, the idea that abilities can be developed. They engage deeply. Their brain is on fire with yet. They engage deeply. They process the error. They learn from it and they correct it. How are we raising our children? Are we raising them for now instead of yet? Are we raising kids who are obsessed with getting As? Are we raising kids who don't know how to dream big dreams?
00:02:40
00:03:32
160
212
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=160s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
Their biggest goal is getting the next A, or the next test score? And are they carrying this need for constant validation with them into their future lives? Maybe, because employers are coming to me and saying, "We have already raised a generation of young workers who can't get through the day without an award." So what can we do? How can we build that bridge to yet? Here are some things we can do. First of all, we can praise wisely, not praising intelligence or talent. That has failed. Don't do that anymore.
00:03:32
00:04:22
212
262
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=212s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
But praising the process that kids engage in, their effort, their strategies, their focus, their perseverance, their improvement. This process praise creates kids who are hardy and resilient. There are other ways to reward yet. We recently teamed up with game scientists from the University of Washington to create a new online math game that rewarded yet. In this game, students were rewarded for effort, strategy and progress. The usual math game rewards you for getting answers right, right now, but this game rewarded process.
00:04:22
00:05:10
262
310
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=262s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
And we got more effort, more strategies, more engagement over longer periods of time, and more perseverance when they hit really, really hard problems. Just the words "yet" or "not yet," we're finding, give kids greater confidence, give them a path into the future that creates greater persistence. And we can actually change students' mindsets. In one study, we taught them that every time they push out of their comfort zone to learn something new and difficult, the neurons in their brain can form new, stronger connections,
00:05:10
00:06:00
310
360
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=310s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
and over time, they can get smarter. Look what happened: In this study, students who were not taught this growth mindset continued to show declining grades over this difficult school transition, but those who were taught this lesson showed a sharp rebound in their grades. We have shown this now, this kind of improvement, with thousands and thousands of kids, especially struggling students. So let's talk about equality. In our country, there are groups of students who chronically underperform, for example, children in inner cities,
00:06:00
00:06:50
360
410
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=360s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
or children on Native American reservations. And they've done so poorly for so long that many people think it's inevitable. But when educators create growth mindset classrooms steeped in yet, equality happens. And here are just a few examples. In one year, a kindergarten class in Harlem, New York scored in the 95th percentile on the national achievement test. Many of those kids could not hold a pencil when they arrived at school. In one year, fourth-grade students in the South Bronx, way behind, became the number one fourth-grade class in the state of New York
00:06:50
00:07:51
410
471
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=410s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
on the state math test. In a year, to a year and a half, Native American students in a school on a reservation went from the bottom of their district to the top, and that district included affluent sections of Seattle. So the Native kids outdid the Microsoft kids. This happened because the meaning of effort and difficulty were transformed. Before, effort and difficulty made them feel dumb, made them feel like giving up, but now, effort and difficulty, that's when their neurons are making new connections, stronger connections.
00:07:51
00:08:50
471
530
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=471s
https://i.ytimg.com/vi/_…axresdefault.jpg
_X0mgOOSpLU
The power of believing that you can improve | Carol Dweck
That's when they're getting smarter. I received a letter recently from a 13-year-old boy. He said, "Dear Professor Dweck, I appreciate that your writing is based on solid scientific research, and that's why I decided to put it into practice. I put more effort into my schoolwork, into my relationship with my family, and into my relationship with kids at school, and I experienced great improvement in all of those areas. I now realize I've wasted most of my life." Let's not waste any more lives, because once we know
00:08:50
00:09:50
530
590
https://www.youtube.com/watch?v=_X0mgOOSpLU&t=530s
https://i.ytimg.com/vi/_…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
So then let's get started. For today, welcome to lecture 10 of CS294-158, Deep Unsupervised Learning. This lecture will be on compression. Before we dive into that, a couple of logistical things. The main logistical things ahead of you: your project milestone, which is a three-page intermediate report, is due on Monday, and we very much look forward to reading those and giving you feedback in the days after the deadline, so you can make sure you're maximally on track for your full final project. The
00:00:00
00:00:40
0
40
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=0s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
other thing that's coming up: in two weeks we'll have our midterm, which we'll figure out how to do remotely under the current circumstances. But the main thing we'll do later this week is release a set of study materials for you that capture the core of the things covered in the class. They're compressed a little bit in terms of how much we're going to have you study, because this is of course a more difficult semester than most due to outside circumstances, so it's a relatively short study guide. It'll be a PDF with the questions and the
00:00:40
00:01:15
40
75
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=40s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
answers, so you'll know exactly what the questions can be and what the answers are that we expect you to get. That will come out later today or tomorrow for you to study. Let me pause here and see if there are any questions about logistics. Oh, and by the way, this lecture is recorded, so if for some reason you don't like your voice to be heard, just like with the in-class lectures that were recorded, please be aware of that. Alright, then let's get started with the content for today. So, compression: what is
00:01:15
00:01:55
75
115
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=75s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
it, why would we care about it in general, and why would we care about it in this class? So what is it? Given some data, you might want to reduce the number of bits for encoding a message. A message could be an image you want to send, or a piece of speech, or maybe some music you want to send across a communication line, and in its original format it might take up a very large number of bits, and you might want to be able to get that same information across by sending fewer bits over the communication channel. So what does it
00:01:55
00:02:34
115
154
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=115s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
look like? You have some bitstream B on the left here, so that's what you start out with. Then what happens next is you want to compress it and end up with a compressed version of that bitstream, and the hope is that that compressed version has fewer bits than the original. So when you send the compressed bitstream over a channel, or store it on a hard drive, or whatever you want to do with those bits in a more compressed way, it's ideally a lot fewer bits. But then when you want to use it later, you should be able to expand it back out, just undo the
00:02:34
00:03:08
154
188
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=154s
https://i.ytimg.com/vi/p…axresdefault.jpg
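The compress / store-or-send / decompress pipeline described in the segment above can be sketched with any off-the-shelf lossless codec. This is a minimal illustration, not the lecture's own algorithm; Python's built-in zlib and the toy redundant message are just stand-ins.

```python
# Minimal sketch of the pipeline: bitstream B -> compressed bits -> exact B again.
import zlib

original = b"the same phrase repeated over and over " * 100  # hypothetical redundant message B

compressed = zlib.compress(original, level=9)  # compress before storing/sending
restored = zlib.decompress(compressed)         # expand back out on the other side

print(len(original), len(compressed))          # the compressed version is much smaller here
assert restored == original                    # lossless: the original bits come back exactly
```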
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
compression into the original. Alright, so why do we care? Well, you could save time, you could save bandwidth over a communications channel, you could save space when you're storing it, so there are many reasons you might care about this. From the AI point of view, and part of why it's interesting for this class, the ability to compress data often reflects an understanding of the data by the system that compressed it. So if you have a system that's really good at compressing data, that means that system somehow has absorbed an understanding of
00:03:08
00:03:44
188
224
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=188s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the data. Now, there are two types of compression: lossy versus lossless compression. In this lecture we'll be fully focused on lossless compression, where the original bits can be completely reconstructed on the output. Sometimes in practice you might care about lossy compression: you say, well, I don't need all the details back; as long as I can save more bits I'm happy to lose some detail. That would be lossy compression, not the topic for this class, but also a topic you might be interested in at some point, so I want to make sure
00:03:44
00:04:15
224
255
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=224s
https://i.ytimg.com/vi/p…axresdefault.jpg
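As a tiny illustration of the lossless/lossy distinction in the segment above (a sketch with made-up numbers, not taken from the lecture): a lossless encoding must allow exact reconstruction, whereas a lossy one deliberately gives up that guarantee.

```python
# Lossy vs. lossless on a made-up list of measurements.
import json
import zlib

samples = [0.123456, 0.123512, 0.987654]

# Lossless: serialize and compress; decompressing and parsing returns the exact list.
blob = zlib.compress(json.dumps(samples).encode())
assert json.loads(zlib.decompress(blob)) == samples

# Lossy: round to 2 decimals before storing -- cheaper, but the originals are unrecoverable.
lossy = [round(x, 2) for x in samples]
print(lossy)  # [0.12, 0.12, 0.99] -- two distinct inputs collapsed to the same value
```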
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you know it exists. Now, one of the very interesting things with compression is that there are some prizes associated with it. Recently Hutter actually increased the prize: it used to be a 50,000 euro prize for compressing human knowledge, and recently it went up by a factor of 10, so it's now a five hundred thousand euro prize if you can compress human knowledge. What does it mean more concretely? There's a one gigabyte file of, I believe, text, this file here, enwik9, and if you can compress that to less than one hundred sixteen megabytes, you win the prize. You
00:04:15
00:04:57
255
297
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=255s
https://i.ytimg.com/vi/p…axresdefault.jpg
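To put the prize target from the segment above in perspective, a back-of-the-envelope calculation, assuming the round figures quoted in the transcript (1 GB down to 116 MB) and roughly one byte per character of text:

```latex
% Rough compression factor implied by the Hutter Prize target quoted above.
\frac{10^{9}\ \text{bytes}}{1.16\times 10^{8}\ \text{bytes}} \;\approx\; 8.6\times,
\qquad
\frac{1.16\times 10^{8}\times 8\ \text{bits}}{10^{9}\ \text{characters}} \;\approx\; 0.93\ \text{bits per character}.
```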
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
won this thing, you cracked it. The reason Hutter put out this prize is not so much because he specifically wants that one gigabyte compressed into 116 megabytes; it's because he believes that one gigabyte has interesting enough information that any system which can represent it as compactly as 116 megabytes must have made, hopefully, that's what he thinks, some AI advances to be able to do that. It's pretty interesting here because, unlike most things we've covered in this class and you'll see in any kind of machine learning, there's no train and
00:04:57
00:05:29
297
329
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=297s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
test split: it's not that he asks you to send in a compressor and has a secret test set he's going to test your compressor on to see how it works. No, it's literally: there's a 1 gigabyte file, and if you can make it small enough, you win the prize. You must be able to decompress it, so you have to effectively send him something that's 116 megabytes or less and includes the code for decoding back into the one gigabyte. So you'd be sending effectively both the decoder program and some encoding of this one gigabyte file together, which would be able to
00:05:29
00:06:05
329
365
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=329s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
reconstruct the original 1 gigabyte file. So it's a very, very specific problem: there's no test set, just that one training example. But nobody's gotten close to actually making this work, so it's an interesting challenge, maybe something you want to think about at some point and see if you can make some progress. Then there's another compression challenge, on images. This is often held at CVPR, the main conference for computer vision; there's a workshop there that looks at how well your compressor does, and there it's really about a compressor that you send
00:06:05
00:06:38
365
398
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=365s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
in, a compressor of course, and they have a secret test set on which they test how well you can compress and decompress the test examples. So, two very different challenges, but both very much at the core of what we're going to be thinking about today in lecture. Alright, so why in this course? We've studied a lot of generative models in this course, and it turns out that compression utilizes generative models: the better the generative model, the better the compression can be. In fact Jonathan, who will cover the second
00:06:38
00:07:19
398
439
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=398s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
half of this lecture, has made several breakthroughs in his PhD research showing how some of the state-of-the-art generative models can be converted into compression algorithms, with these generative models under the hood, such that you can get better compression than you might get otherwise. We'll cover that later, but there's a very close connection between better generative models and better compression. The material we would recommend for this lecture is this PDF overview, a nice write-up that
00:07:19
00:07:55
439
475
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=439s
https://i.ytimg.com/vi/p…axresdefault.jpg
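The standard information-theoretic reason behind the claim in the segment above, stated here as background rather than as the lecture's exact derivation: if a lossless code is built from a model q of the true data distribution p, the expected code length is essentially the cross-entropy, so a better generative model (lower cross-entropy) permits shorter codes.

```latex
% Expected code length when coding x ~ p(x) with a prefix code built from model q(x)
% (up to at most one bit of rounding overhead per symbol):
\mathbb{E}_{x\sim p}\big[\ell(x)\big]
\;\approx\; \mathbb{E}_{x\sim p}\big[-\log_2 q(x)\big]
\;=\; H(p) + D_{\mathrm{KL}}(p\,\|\,q),
% which is minimized, down to the entropy H(p), exactly when q matches p.
```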
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
covers the background on, essentially, information theory and compression that we'll be covering in this lecture, at least the first half; in the second half we'll dive a lot more into the deep learning aspects and how they tie into this. So, some applications you might have seen: generic file compression (gzip, 7z, zip), file systems, various multimedia formats you might have seen (JPEG files, GIF files, MP3, MP4), and communications that maybe you don't see in use anymore now, but where compression played a big role in the
00:07:55
00:08:29
475
509
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=475s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
past: fax, modem, Skype, and so forth. All of these are examples where the original information might have been represented with many, many bits, too large for you to store in a file in that format; because you can reduce them and later get back out the original, you can now store it more efficiently or send it more efficiently over a communication line. When you send it more efficiently over a communication line, it reduces the amount of data you need to send and, in the process, also reduces the latency, because there might be less
00:08:29
00:09:02
509
542
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=509s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
delay, assuming you can decode quickly on the other side. Now, maybe you followed this TV show called Silicon Valley; it's, uh, well, pretty funny I would say, with many things that are maybe a little too close to home and too close to true, but still pretty funny. And if you watched that show on HBO, you'll have noticed that the central company, Pied Piper, what they put forward as their product is, well, a middle-out compression algorithm. Nobody knows what middle-out is, but they put forward a compression algorithm
00:09:02
00:09:41
542
581
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=542s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
and that's the secret sauce of their company. It turns out that some people really do this for their actual company, so there are various startups out there that don't disclose exactly what's under the hood but invent new compression algorithms, most likely using machine learning under the hood, to improve upon past state-of-the-art compression. Now, this specific company actually named itself after the Silicon Valley show, where the company is called Pied Piper, and this one is called "pact pie", so there's actually
00:09:41
00:10:14
581
614
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=581s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
a real thing; they presented at TechCrunch in 2015. Now, the first question you might ask is: can we have universal data compression? This is a fundamental question; you'll see in this lecture that a lot of the questions we ask tend to be very fundamental, where we can actually give very, very strong theoretical answers, sometimes negative answers. So, can we come up with universal data compression? What would that mean? That would be: can we come up with something that, no matter what, let's say, file you give it, it can make it smaller and later decompress it
00:10:14
00:10:51
614
651
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=614s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
back out to the original? Well, is that possible? Okay, let's see: imagine you want to compress every possible bitstream you ever encounter. It turns out that's not possible, and we can show why. What's the intuition? It should be simple: we'll do a proof by contradiction. Suppose you have a universal data compression algorithm U that can compress every bitstream: no matter what you feed it, it's going to make it fewer bits, and then it can decompress it back out later to the original. Okay, now, given
00:10:51
00:11:33
651
693
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=651s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
a bit string B0, you can compress it to get a smaller bit string B1, with strictly fewer bits, otherwise it's not a universal compressor. Now, B1 you can feed into it again, and it'll turn that into B2, which is yet smaller. You keep doing this; if you do this sufficiently many times, at some point you'll have a bit string of size 0. At that point it's obvious you cannot recover what the original was, because it could have been anything; everything gets turned into the size-0 string, and you cannot tell what went in. So what this shows, then, assuming
00:11:33
00:12:09
693
729
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=693s
https://i.ytimg.com/vi/p…axresdefault.jpg
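A quick empirical companion to the argument above (a sketch using zlib and arbitrary toy data, not the hypothetical universal compressor U): if compression always strictly shrank its input, repeatedly compressing would drive any file toward zero bytes; in reality the output quickly stops shrinking and typically starts growing once the data is close to incompressible.

```python
# Repeatedly compress a buffer with zlib and watch the sizes: the first pass
# shrinks a lot, later passes plateau or grow slightly -- no real compressor
# behaves like the hypothetical universal compressor U.
import zlib

data = ("some moderately redundant text " * 200).encode()
sizes = [len(data)]
for _ in range(5):
    data = zlib.compress(data, level=9)
    sizes.append(len(data))

print(sizes)  # e.g. a big drop at step one, then the sizes stop decreasing
```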
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
somebody tells you "I have a universal data compressor, it can compress everything, no problem," is that this is actually not possible. Another way to prove it is to do it by counting. You can say: okay, suppose your algorithm can compress all thousand-bit strings. How many thousand-bit strings are there? There are two to the one thousand possible bit strings. Now, if we can compress all of them, that means we can take every one of them and turn it into something smaller, and distinct from the others, otherwise we cannot get the
00:12:09
00:12:47
729
767
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=729s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
original back out. But if we look at all possible shorter bit strings, there actually aren't enough of them to encode all two to the 1000 possible thousand-bit strings. So since we can't encode all two to the 1000 bit strings with shorter strings, it means we cannot compress all of them. So we have two different proofs here showing that universal data compression is just not possible. Why is compression possible in practice, though? Even if you cannot universally compress everything, there are statistical patterns that you can exploit. For
00:12:47
00:13:26
767
806
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=767s
https://i.ytimg.com/vi/p…axresdefault.jpg
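The counting argument in the segment above is the pigeonhole principle; written out for 1000-bit strings (a standard restatement, not the lecturer's exact wording):

```latex
% Number of 1000-bit strings versus number of strictly shorter bit strings:
\#\{\text{strings of length } 1000\} = 2^{1000},
\qquad
\#\{\text{strings of length} < 1000\} = \sum_{k=0}^{999} 2^{k} = 2^{1000} - 1.
% Since 2^{1000} - 1 < 2^{1000}, no injective map can send every 1000-bit string
% to a strictly shorter one, so no compressor can shrink all of them.
```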
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
example, here's a piece of text, and I'll give you all a minute to read it. As you're reading this text you'll notice, well, likely you'll notice, that there's something funny about it: the words are mostly misspelled. But despite the words being misspelled, it's actually still very feasible to read this, and effectively what it says is that most people have no problem reading a piece of text if for every word you keep the first two letters, you keep the last two letters, but then everything in between you can
00:13:26
00:14:12
806
852
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=806s
https://i.ytimg.com/vi/p…axresdefault.jpg
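The misspelled-but-readable text described in the segment above can be reproduced in a few lines: keep each word's first two and last two letters and shuffle the interior. This is a playful sketch with an arbitrary example sentence; the exact text on the lecture slide is not reproduced here.

```python
# Keep the first two and last two letters of each word and shuffle what's in
# between -- the result stays surprisingly readable, illustrating how much
# statistical redundancy natural language carries (and why it compresses well).
import random

def scramble_word(word: str) -> str:
    if len(word) <= 4:
        return word
    middle = list(word[2:-2])
    random.shuffle(middle)
    return word[:2] + "".join(middle) + word[-2:]

sentence = "statistical patterns make compression possible in practice"
print(" ".join(scramble_word(w) for w in sentence.split()))
```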
